[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-fjchange--awesome-video-anomaly-detection":3,"tool-fjchange--awesome-video-anomaly-detection":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 
AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":83,"stars":86,"forks":87,"last_commit_at":88,"license":83,"difficulty_score":89,"env_os":90,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":95,"github_topics":96,"view_count":23,"oss_zip_url":83,"oss_zip_packed_at":83,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":144},2060,"fjchange\u002Fawesome-video-anomaly-detection","awesome-video-anomaly-detection","Papers for Video Anomaly Detection, released codes collection, Performance Comparision.","awesome-video-anomaly-detection 是一个专注于视频异常检测领域的开源资源合集，旨在为研究者和开发者提供一站式的学术导航。它系统性地整理了该方向的高质量学术论文、已公开的代码实现以及详细的性能对比数据，覆盖了从早期经典模型到 AAAI、CVPR 等顶级会议的最新研究成果。\n\n在智能监控、交通管理等场景中，如何从海量视频里自动识别打架、车祸或违规闯入等罕见“异常”行为是一大技术难点。由于异常样本稀缺且形态多变，传统方法往往难以奏效。awesome-video-anomaly-detection 通过汇聚无监督学习、弱监督学习等多种技术路线的资源，帮助用户快速复现前沿算法，规避重复造轮子的困境，从而加速新模型的验证与迭代。\n\n这份清单特别适合计算机视觉领域的研究人员、算法工程师以及相关专业的学生使用。其独特亮点在于不仅罗列了 UCF-Crime、ShanghaiTech 等主流数据集的下载链接，还细致地标注了基于骨骼点、开放集（Open-Set）等特定技术场景的细分资源，甚至涵盖了行车记录仪事故预测等垂直领域。无论是想要入门该领域的新手，还是寻求最新技术突破的资","awesome-video-anomaly-detection 是一个专注于视频异常检测领域的开源资源合集，旨在为研究者和开发者提供一站式的学术导航。它系统性地整理了该方向的高质量学术论文、已公开的代码实现以及详细的性能对比数据，覆盖了从早期经典模型到 
AAAI、CVPR 等顶级会议的最新研究成果。\n\n在智能监控、交通管理等场景中，如何从海量视频里自动识别打架、车祸或违规闯入等罕见“异常”行为是一大技术难点。由于异常样本稀缺且形态多变，传统方法往往难以奏效。awesome-video-anomaly-detection 通过汇聚无监督学习、弱监督学习等多种技术路线的资源，帮助用户快速复现前沿算法，规避重复造轮子的困境，从而加速新模型的验证与迭代。\n\n这份清单特别适合计算机视觉领域的研究人员、算法工程师以及相关专业的学生使用。其独特亮点在于不仅罗列了 UCF-Crime、ShanghaiTech 等主流数据集的下载链接，还细致地标注了基于骨骼点、开放集（Open-Set）等特定技术场景的细分资源，甚至涵盖了行车记录仪事故预测等垂直领域。无论是想要入门该领域的新手，还是寻求最新技术突破的资深专家，都能从中获得极具价值的参考指引。","# awesome-video-anomaly-detection  [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\nPapers for Video Anomaly Detection, released code collections.\n\nFor any addition or bug, please open an issue, send a pull request, or e-mail me at `fjchange@hotmail.com`.\n\n## Recently Updated\n- AAAI 2022\n- CVPR 2022\n\n## Datasets\n0. UMN [`Download link`](http:\u002F\u002Fmha.cs.umn.edu\u002F)\n1. UCSD [`Download link`](http:\u002F\u002Fwww.svcl.ucsd.edu\u002Fprojects\u002Fanomaly\u002Fdataset.html)\n2. Subway Entrance\u002FExit [`Download link`](http:\u002F\u002Fvision.eecs.yorku.ca\u002Fresearch\u002Fanomalous-behaviour-data\u002F)\n3. CUHK Avenue [`Download link`](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002Fleojia\u002Fprojects\u002Fdetectabnormal\u002Fdataset.html)\n    - HD-Avenue \u003Cspan id = "05">[Skeleton-based](#01902)\u003C\u002Fspan>\n4. ShanghaiTech [`Download link`](https:\u002F\u002Fsvip-lab.github.io\u002Fdataset\u002Fcampus_dataset.html)\n    - HD-ShanghaiTech \u003Cspan id = "00">[Skeleton-based](#01902)\u003C\u002Fspan>\n5. UCF-Crime (Weakly Supervised)\n    - UCFCrime2Local (subset of UCF-Crime but with spatial annotations.) 
[`Download_link`](http:\u002F\u002Fimagelab.ing.unimore.it\u002FUCFCrime2Local), \u003Cspan id = "01">[Ano-Locality](#21902)\u003C\u002Fspan>\n    - Spatial Temporal Annotations [`Download_link`](https:\u002F\u002Fgithub.com\u002Fxuzero\u002FUCFCrime_BoundingBox_Annotation) \u003Cspan id = "02">[Background-Bias](#21901)\u003C\u002Fspan>\n6. Traffic-Train\n7. Belleview\n8. Street Scene (WACV 2020) \u003Cspan id = "03">[Street Scenes](#02001)\u003C\u002Fspan>, [`Download link`](https:\u002F\u002Fwww.merl.com\u002Fdemos\u002Fvideo-anomaly-detection)\n9. IITB-Corridor (WACV 2020) \u003Cspan id = "04">[Rodrigurs.etl](#02002)\u003C\u002Fspan>\n10. XD-Violence (ECCV 2020) \u003Cspan id ='05'>[XD-Violence](#12003)\u003C\u002Fspan> [`Download link`](https:\u002F\u002Froc-ng.github.io\u002FXD-Violence\u002F)\n11. ADOC (ACCV 2020) \u003Cspan id ='06'>[ADOC](#02012)\u003C\u002Fspan> [`Download_link`](http:\u002F\u002Fqil.uh.edu\u002Fmain\u002Fdatasets\u002F)\n12. UBnormal (CVPR 2022) \u003Cspan id='07'>[UBnormal]\u003C\u002Fspan> [`Project Link`](https:\u002F\u002Fgithub.com\u002Flilygeorgescu\u002FUBnormal) `Open-Set`\n\n__The datasets below are about traffic accident anticipation in dashcam or surveillance videos__\n\n1. CADP [(CarCrash Accidents Detection and Prediction)](https:\u002F\u002Fgithub.com\u002Fankitshah009\u002FCarCrash_forecasting_and_detection)\n2. DAD  [paper](https:\u002F\u002Fyuxng.github.io\u002Fchan_accv16.pdf), [`Download link`](https:\u002F\u002Faliensunmin.github.io\u002Fproject\u002Fdashcam\u002F)\n3. A3D  [paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.00618?), [`Download link`](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002Ftad-IROS2019)\n4. DADA  [`Download link`](https:\u002F\u002Fgithub.com\u002FJWFangit\u002FLOTVS-DADA)\n5. DoTA   [`Download_link`](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002FDetection-of-Traffic-Anomaly)\n6. Iowa DOT [`Download_link`](https:\u002F\u002Fwww.aicitychallenge.org\u002F2018-ai-city-challenge\u002F)\n\n\n1. 
Driver_Anomaly [Project_link](https:\u002F\u002Fgithub.com\u002Fokankop\u002FDriver-Anomaly-Detection)\n-----\n## Unsupervised\n### 2016\n1. \u003Cspan id = \"01601\">[Conv-AE]\u003C\u002Fspan> [Learning Temporal Regularity in Video Sequences](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FHasan_Learning_Temporal_Regularity_CVPR_2016_paper.pdf), `CVPR 16`. [Code](https:\u002F\u002Fgithub.com\u002Fiwyoo\u002FTemporalRegularityDetector-tensorflow\u002Fblob\u002Fmaster\u002Fmodel.py)\n### 2017\n1. \u003Cspan id = \"01701\">[Hinami.etl]\u003C\u002Fspan> [Joint Detection and Recounting of Abnormal Events by Learning Deep Generic Knowledge](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FHinami_Joint_Detection_and_ICCV_2017_paper.pdf), `ICCV 2017`. (Explainable VAD)\n2. \u003Cspan id = \"01702\">[Stacked-RNN]\u003C\u002Fspan> [A revisit of sparse coding based anomaly detection in stacked rnn framework](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLuo_A_Revisit_of_ICCV_2017_paper.pdf), `ICCV 2017`. [code](https:\u002F\u002Fgithub.com\u002FStevenLiuWen\u002FsRNN_TSC_Anomaly_Detection)\n3. \u003Cspan id = \"01703\">[ConvLSTM-AE]\u003C\u002Fspan> [Remembering history with convolutional LSTM for anomaly detection](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8019325), `ICME 2017`.[Code](https:\u002F\u002Fgithub.com\u002Fzachluo\u002Fconvlstm_anomaly_detection)\n4. \u003Cspan id = \"01704\">[Conv3D-AE]\u003C\u002Fspan> [Spatio-Temporal AutoEncoder for Video Anomaly Detection](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3123266.3123451),`ACM MM 17`.\n5. \u003Cspan id = \"01705\">[Unmasking]\u003C\u002Fspan> [Unmasking the abnormal events in video](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FIonescu_Unmasking_the_Abnormal_ICCV_2017_paper.pdf), `ICCV 17`.\n6. 
\u003Cspan id = \"01706\">[DeepAppearance]\u003C\u002Fspan> [Deep appearance features for abnormal behavior detection in video](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FRadu_Tudor_Ionescu\u002Fpublication\u002F320361315_Deep_Appearance_Features_for_Abnormal_Behavior_Detection_in_Video\u002Flinks\u002F5a469e9fa6fdcce1971b7258\u002FDeep-Appearance-Features-for-Abnormal-Behavior-Detection-in-Video.pdf)\n### 2018\n1. \u003Cspan id = \"01801\">[FramePred]\u003C\u002Fspan> [Future Frame Prediction for Anomaly Detection -- A New Baseline](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiu_Future_Frame_Prediction_CVPR_2018_paper.pdf), `CVPR 2018`. [code](https:\u002F\u002Fgithub.com\u002FStevenLiuWen\u002Fano_pred_cvpr2018)\n2. \u003Cspan id = \"01802\">[ALOOC]\u003C\u002Fspan> [Adversarially Learned One-Class Classifier for Novelty Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSabokrou_Adversarially_Learned_One-Class_CVPR_2018_paper.pdf), `CVPR 2018`. [code](https:\u002F\u002Fgithub.com\u002Fkhalooei\u002FALOCC-CVPR2018)\n3. [Detecting Abnormality Without Knowing Normality: A Two-stage Approach for Unsupervised Video Abnormal Event Detection](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3240508.3240615), `ACM MM 18`.\n\n### 2019\n1. \u003Cspan id = \"01901\">[Mem-AE]\u003C\u002Fspan> [Memorizing Normality to Detect Anomaly: Memory-augmented Deep Autoencoder for Unsupervised Anomaly Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGong_Memorizing_Normality_to_Detect_Anomaly_Memory-Augmented_Deep_Autoencoder_for_Unsupervised_ICCV_2019_paper.pdf), `ICCV 2019`.[code](https:\u002F\u002Fgithub.com\u002Fdonggong1\u002Fmemae-anomaly-detection)\n2. 
\u003Cspan id = \"01902\">[Skeleton-based]\u003C\u002Fspan> [Learning Regularity in Skeleton Trajectories for Anomaly Detection in Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FMorais_Learning_Regularity_in_Skeleton_Trajectories_for_Anomaly_Detection_in_Videos_CVPR_2019_paper.pdf), `CVPR 2019`.[code](https:\u002F\u002Fgithub.com\u002FRomeroBarata\u002Fskeleton_based_anomaly_detection)\n3. \u003Cspan id = \"01903\">[Object-Centric]\u003C\u002Fspan> [Object-Centric Auto-Encoders and Dummy Anomalies for Abnormal Event Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FIonescu_Object-Centric_Auto-Encoders_and_Dummy_Anomalies_for_Abnormal_Event_Detection_in_CVPR_2019_paper.pdf), `CVPR 2019`.\n4. \u003Cspan id = \"01904\">[Appearance-Motion Correspondence]\u003C\u002Fspan> [Anomaly Detection in Video Sequence with Appearance-Motion Correspondence](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FNguyen_Anomaly_Detection_in_Video_Sequence_With_Appearance-Motion_Correspondence_ICCV_2019_paper.pdf), `ICCV 2019`.[code](https:\u002F\u002Fgithub.com\u002Fnguyetn89\u002FAnomaly_detection_ICCV2019)\n5. \u003Cspan id = \"01905\">[AnoPCN]\u003C\u002Fspan>[AnoPCN: Video Anomaly Detection via Deep Predictive Coding Network](https:\u002F\u002Fpeople.cs.clemson.edu\u002F~jzwang\u002F20018630\u002Fmm2019\u002Fp1805-ye.pdf), ACM MM 2019.\n### 2020\n1. \u003Cspan id = \"02001\">[Street-Scene]\u003C\u002Fspan> [Street Scene: A new dataset and evaluation protocol for video anomaly detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FRamachandra_Street_Scene_A_new_dataset_and_evaluation_protocol_for_video_WACV_2020_paper.pdf), `WACV 2020`.\n2. 
\u003Cspan id = \"02002\">[Rodrigurs.etl])\u003C\u002Fspan> [Multi-timescale Trajectory Prediction for Abnormal Human Activity Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FRodrigues_Multi-timescale_Trajectory_Prediction_for_Abnormal_Human_Activity_Detection_WACV_2020_paper.pdf), `WACV 2020`.\n3. \u003Cspan id = \"02003\">[GEPC]\u003C\u002Fspan> [Graph Embedded Pose Clustering for Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.11850.pdf), `CVPR 2020`.[code](https:\u002F\u002Fgithub.com\u002Famirmk89\u002Fgepc)\n4. \u003Cspan id = \"02004\">[Self-trained]\u003C\u002Fspan> [Self-trained Deep Ordinal Regression for End-to-End Video Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.06780.pdf), `CVPR 2020`. \n5. \u003Cspan id = \"02005\">[MNAD]\u003C\u002Fspan> [Learning Memory-guided Normality for Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.13228.pdf), `CVPR 2020`. [code](https:\u002F\u002Fcvlab.yonsei.ac.kr\u002Fprojects\u002FMNAD)\n6. \u003Cspan id = \"02006\">[Continual-AD]]\u003C\u002Fspan> [Continual Learning for Anomaly Detection in Surveillance Videos](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.07941),`CVPR 2020 Worksop.`\n7. \u003Cspan id = \"02007\">[OGNet]\u003C\u002Fspan> [Old is Gold: Redefining the Adversarially Learned One-Class Classifier Training Paradigm](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZaheer_Old_Is_Gold_Redefining_the_Adversarially_Learned_One-Class_Classifier_Training_CVPR_2020_paper.pdf), `CVPR 2020`. [code](https:\u002F\u002Fgithub.com\u002Fxaggi\u002FOGNet)\n8. \u003Cspan id = \"02008\">[Any-Shot]\u003C\u002Fspan> [Any-Shot Sequential Anomaly Detection in Surveillance Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw54\u002FDoshi_Any-Shot_Sequential_Anomaly_Detection_in_Surveillance_Videos_CVPRW_2020_paper.pdf),`CVPR 2020 workshop`.\n9. 
\u003Cspan id = \"02009\">[Few-Shot]\u003C\u002Fspan>[Few-Shot Scene-Adaptive Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07843.pdf)`ECCV 2020 Spotlight` [code](https:\u002F\u002Fgithub.com\u002Fyiweilu3\u002FFew-shot-Scene-adaptive-Anomaly-Detection)\n10. \u003Cspan id = \"02010\">[CDAE]\u003C\u002Fspan>[Clustering-driven Deep Autoencoder for Video Anomaly Detection](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600324.pdf)`ECCV 2020`\n11. \u003Cspan id = \"02011\">[VEC]\u003C\u002Fspan>[Cloze Test Helps: Effective Video Anomaly Detection via Learning to Complete Video Events](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.11988)`ACM MM 2020 Oral` [code](https:\u002F\u002Fgithub.com\u002Fyuguangnudt\u002FVEC_VAD)\n12. \u003Cspan id ='02012'>[ADOC]\u003C\u002Fspan>[A Day on Campus - An Anomaly Detection Dataset for Events in a Single Camera] `ACCV 2020`\n13. \u003Cspan id ='02013'>[CAC]\u003C\u002Fspan>[Cluster Attention Contrast for Video Anomaly Detection](http:\u002F\u002Fweb.pkusz.edu.cn\u002Fadsp\u002Ffiles\u002F2020\u002F08\u002FCluster_Attention_Contrast_for_Video_Anomaly_Detection.pdf) `ACM MM 2020`\n14. \u003Cspan id ='02014'>[STC-Graph]\u003C\u002Fspan>[Scene-Aware Context Reasoning for Unsupervised Abnormal Event Detection in Videos](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413887) `ACM MM 2020`\n\n### 2021\n1. \u003Cspan id ='02101'>[AMCM]\u003C\u002Fspan>[Appearance-Motion Memory Consistency Network for Video Anomaly Detection](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-4120.CaiR.pdf) `AAAI 2021`\n2. \u003Cspan id='02102'>[SSMT,Self-Supervised-Multi-Task]\u003C\u002Fspan>[Anomaly Detection in Video via Self-Supervised and Multi-Task Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.07491.pdf) `CVPR 2021`\n3. 
\u003Cspan id='02103'>[HF2-VAD]\u003C\u002Fspan>[A Hybrid Video Anomaly Detection Framework via Memory-Augmented Flow Reconstruction and Flow-Guided Frame Prediction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.06852.pdf)`ICCV 2021 Oral`\n4. \u003Cspan id='02104'>[ROADMAP]\u003C\u002Fspan>[Robust Unsupervised Video Anomaly Detection by Multipath Frame Prediction](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.02763)`TNNLS 2021`\n5. \u003Cspan id='02105'>[AEP]\u003C\u002Fspan>[Abnormal Event Detection and Localization via Adversarial Event Prediction](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9346050\u002F) `TNNLS 2021`\n\n### 2022\n1. \u003Cspan id='02201'>[Causal]\u003C\u002Fspan>[A Causal Inference Look At Unsupervised Video Anomaly Detection](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-37.LinX.pdf)`AAAI 2022`\n2. \u003Cspan id='02202'>[BDPN]\u003C\u002Fspan>[Comprehensive Regularization in a Bi-directional Predictive Network for Video Anomaly Detection](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-470.ChenC.pdf)`AAAI 2022`\n3. \u003Cspan id='02203'>[GCL]\u003C\u002Fspan>[Generative Cooperative Learning for Unsupervised Video Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.03962.pdf)`CVPR 2022`\n\n## Weakly-Supervised\n### 2018\n1. \u003Cspan id = "11801">[Sultani.etl]\u003C\u002Fspan> [Real-world Anomaly Detection in Surveillance Videos](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSultani_Real-World_Anomaly_Detection_CVPR_2018_paper.pdf), `CVPR 2018` [code](https:\u002F\u002Fgithub.com\u002FWaqasSultani\u002FAnomalyDetectionCVPR2018)\n### 2019\n1. 
\u003Cspan id = \"11901\">[GCN-Anomaly]\u003C\u002Fspan> [Graph Convolutional Label Noise Cleaner:Train a Plug-and-play Action Classifier for Anomaly Detection](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhong_Graph_Convolutional_Label_Noise_Cleaner_Train_a_Plug-And-Play_Action_Classifier_CVPR_2019_paper.pdf),` CVPR 2019`, \n[code](https:\u002F\u002Fgithub.com\u002Fjx-zhong-for-academic-purpose\u002FGCN-Anomaly-Detection)\n2. \u003Cspan id = \"11902\">[MLEP]\u003C\u002Fspan> [Margin Learning Embedded Prediction for Video Anomaly Detection with A Few Anomalies](https:\u002F\u002Fpdfs.semanticscholar.org\u002Fe878\u002F6acbfabaf4938c9c8e2d3a15e0f110a1ec7f.pdf), `IJCAI 2019`[code](https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FMLEP).\n3. \u003Cspan id = \"11903\">[IBL]\u003C\u002Fspan> [Temporal Convolutional Network with Complementary Inner Bag Loss For Weakly Supervised Anomaly Detection](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8803657\u002F). `ICIP 19`.\n4. \u003Cspan id = \"11904\">[Motion-Aware]\u003C\u002Fspan> [Motion-Aware Feature for Improved Video Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.10211). `BMVC 19`.\n### 2020\n1. \u003Cspan id = \"12001\">[Siamese]\u003C\u002Fspan> [Learning a distance function with a Siamese network to localize anomalies in videos](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.09189), `WACV 2020`.\n2. \u003Cspan id = \"12002\">[AR-Net]\u003C\u002Fspan> [Weakly Supervised Video Anomaly Detection via Center-Guided Discrimative Learning](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9102722),` ICME 2020`.[code](https:\u002F\u002Fgithub.com\u002Fwanboyang\u002FAnomaly_AR_Net_ICME_2020)\n3. \u003Cspan id ='12003'>['XD-Violence']\u003C\u002Fspan> [Not only Look, but also Listen: Learning Multimodal Violence Detection under Weak Supervision](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.04687.pdf) `ECCV 2020`\n4. 
\u003Cspan id ='12004'>[CLAWS]\u003C\u002Fspan> [CLAWS: Clustering Assisted Weakly Supervised Learning with Normalcy Suppression for Anomalous Event Detection](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123670358.pdf) `ECCV 2020`\n### 2021\n1. \u003Cspan id="12101">[MIST]\u003C\u002Fspan> [MIST: Multiple Instance Self-Training Framework for Video Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01633) `CVPR 2021` [Project Page](https:\u002F\u002Fkiwi-fung.win\u002F2021\u002F04\u002F28\u002FMIST\u002F)\n2. \u003Cspan id='12102'>[RTFM]\u003C\u002Fspan> [Weakly-supervised Video Anomaly Detection with Contrastive Learning of Long and Short-range Temporal Features](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.10030.pdf) `ICCV 2021` [Code](https:\u002F\u002Fgithub.com\u002Ftianyu0207\u002FRTFM)\n3. \u003Cspan id='12103'>[STAD]\u003C\u002Fspan>[Weakly-Supervised Spatio-Temporal Anomaly Detection in Surveillance Video](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03825) `IJCAI 2021`\n4. \u003Cspan id='12104'>[WSAL]\u003C\u002Fspan>[Localizing Anomalies From Weakly-Labeled Videos](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.08944)`TIP 2021` [Code](https:\u002F\u002Fgithub.com\u002Fktr-hubrt\u002FWSAL)\n5. \u003Cspan id='12105'>[CRFD]\u003C\u002Fspan>[Learning Causal Temporal Relation and Feature Discrimination for Anomaly Detection](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9369126\u002F)`TIP 2021`\n### 2022\n1. \u003Cspan id='12201'>[MSL]\u003C\u002Fspan>[Self-Training Multi-Sequence Learning with Transformer for Weakly Supervised Video Anomaly Detection](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6637.LiS.pdf)`AAAI 2022`\n\n## Supervised\n### 2019\n1. 
\u003Cspan id = \"21901\">[Background-Bias]\u003C\u002Fspan>[Exploring Background-bias for Anomaly Detection in Surveillance Videos](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3343031.3350998), `ACM MM 19`.\n2. \u003Cspan id = \"21902\">[Ano-Locality]\u003C\u002Fspan>[Anomaly locality in video suveillance](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.10364).\n\n## Others\n### 2020\n1. \u003Cspan id =\"62001\">[Few-Shot]\u003C\u002Fspan>[Few-Shot Scene-Adaptive Anomaly Detection](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07843) `ECCV 2020`[code](https:\u002F\u002Fgithub.com\u002Fyiweilu3\u002FFew-shot-Scene-adaptive-Anomaly-Detection)\n------\n## Reviews \u002F Surveys\n1. An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos, J. Image, 2018.[page](https:\u002F\u002Fbeedotkiran.github.io\u002FVideoAnomaly.html)\n2. DEEP LEARNING FOR ANOMALY DETECTION: A SURVEY, [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.03407.pdf)\n3. Video Anomaly Detection for Smart Surveillance [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.00222.pdf)\n4.  A survey of single-scene video anomaly detection, `TPAMI 2020` [paper](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.05993.pdf).\n\n\n## Books\n1. Outlier Analysis. Charu C. Aggarwal\n## Specific Scene\n\n------\n\nGenerally, anomaly detection in recent researches are based on the datasets from pedestrian (likes UCSD, Avenue, ShanghaiTech, etc.)， or UCF-Crime (real-world anomaly).\nHowever some focus on specific scene as follows.\n\n### Traffic\nCVPR  workshop, AI City Challenge series.\n#### \tFirst-Person Traffic\n​\t\tUnsupervised Traffic Accident Detection in First-Person Videos, IROS 2019.\n\n#### \tDriving\n\n​\t\tWhen, Where, and What? A New Dataset for Anomaly Detection in Driving Videos. [github](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002FDetection-of-Traffic-Anomaly)\n\n### Old-man Fall Down\n\n### Fighting\u002FViolence\n1. 
Localization Guided Fight Action Detection in Surveillance Videos. ICME 2019.\n\n### Social \u002F Group Anomaly\n1. Social-BiGAT: Multimodal Trajectory Forecasting using Bicycle-GAN and Graph Attention Networks, NeurIPS 2019.\n\n## Related Topics:\n1. Video Representation (Unsupervised Video Representation, reconstruction, prediction, etc.)\n2. Object Detection\n3. Pedestrian Detection\n4. Skeleton Detection\n5. Graph Neural Networks\n6. GAN\n7. Action Recognition \u002F Temporal Action Localization\n8. Metric Learning\n9. Label Noise Learning\n10. Cross-Modal \u002F Multi-Modal\n11. Dictionary Learning\n12. One-Class Classification \u002F Novelty Detection \u002F Out-of-Distribution Detection\n13. Action Recognition\n    - Human in Events: A Large-Scale Benchmark for Human-centric Video Analysis in Complex Events. ACM MM 2020 workshop.\n\n## Performance Evaluation Methods\n1. AUC\n2. PR-AUC\n3. Score Gap\n4. False Alarm Rate on Normal with 0.5 as threshold (weakly supervised, proposed in CVPR 18)\n\nAs discussed in Issue [#12](https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F12), the reported results below are `Micro-AUC`; if a paper provides `Macro-AUC` instead, the result is tagged with `*`. 
\n\n## Performance Comparison on UCF-Crime\n| Model                                               | Reported on Conference\u002FJournal | Supervision | Feature  | Encoder-based | 32 Segments | AUC (%) | FAR@0.5 on Normal (%) |\n| --------------------------------------------------- | ------------------------------- | ---------- | -------- | ------- | ----------- | ------- | --------------------- |\n| \u003Cspan id = "31801">[Sultani.etl](#11801)\u003C\u002Fspan>     | CVPR 18                         | Weakly     | C3D RGB  | X       | √           | 75.41   | 1.9                   |\n| \u003Cspan id = "31903">[IBL](#11903)\u003C\u002Fspan>             | ICIP 19                         | Weakly     | C3D RGB  | X       | √           | 78.66   | -                     |\n| \u003Cspan id = "31904">[Motion-Aware](#11904)\u003C\u002Fspan>    | BMVC 19                         | Weakly     | PWC Flow | X       | √           | 79.0    | -                     |\n| \u003Cspan id = "31901">[GCN-Anomaly](#11901)\u003C\u002Fspan>     | CVPR 19                         | Weakly     | TSN RGB  | √       | X           | 82.12   | 0.1                   |\n| \u003Cspan id = '32013'>[ST-Graph](#02014)\u003C\u002Fspan>        | ACM MM 20                       | Un         | -        | √       | X           | 72.7    | -                     |\n| \u003Cspan id = "31902">[Background-Bias](#21901)\u003C\u002Fspan> | ACM MM 19                       | Fully      | NLN RGB  | √       | X           | 82.0    | -                     |\n| \u003Cspan id = "31905">[CLAWS](#12004)\u003C\u002Fspan>           | ECCV 20                         | Weakly     | C3D RGB  | √       | X           | 83.03   | -                     |\n| \u003Cspan id = "32101">[MIST](#12101)\u003C\u002Fspan>            | CVPR 21                         | Weakly     | I3D RGB  | √       | X           | 82.30   | 0.13                  |\n| \u003Cspan id = '32102'>[RTFM](#12102)\u003C\u002Fspan>            | ICCV 
21                         | Weakly     | I3D RGB  | X       | √           | 84.03   | -                     |\n| \u003Cspan id = '32104'>[WSAL](#12104)\u003C\u002Fspan>            | TIP 21                          | Weakly     | I3D RGB  | X       | √           | 85.38   | -                     |\n| \u003Cspan id = '32105'>[CRFD](#12105)\u003C\u002Fspan>            | TIP 21                          | Weakly     | I3D RGB  | X       | √           | 84.89   | -                     |\n| \u003Cspan id = '32201_1'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                          | Weakly     | C3D RGB  | √        | X           | 82.85   | -                     |\n| \u003Cspan id = '32201_2'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                          | Weakly     | I3D RGB  | √        | X           | 85.30   | -                     |\n| \u003Cspan id = '32201_3'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                          | Weakly     | VideoSwin-RGB  | √        | X           | 85.62   | -                     |\n| \u003Cspan id = '32203_1'>[GCL](#02203)\u003C\u002Fspan>            | CVPR 22                          | Weakly     | ResNext  | √        | X           | 79.84   | -                     |\n| \u003Cspan id = '32203_2'>[GCL](#02203)\u003C\u002Fspan>            | CVPR 22                          | Un     | ResNext  | √        | X           | 71.04   | -                     |\n\n## Performance Comparison on ShanghaiTech\n| Model                                             | Reported on Conference\u002FJournal | Supervision                   | Feature            | Encoder-based | AUC(%) | FAR@0.5 (%) |\n| ------------------------------------------------- | ------------------------------ | ----------------------------- | ------------------ | ------- | ------ | ----------- |\n| \u003Cspan id = "41601">[Conv-AE](#01601)\u003C\u002Fspan>       | CVPR 16                        | Un                            | -            
      | √       | 60.85  | -           |\n| \u003Cspan id = \"41702\">[stacked-RNN](#01702)\u003C\u002Fspan>   | ICCV 17                        | Un                            | -                  | √       | 68.0   | -           |\n| \u003Cspan id = \"41801\">[FramePred](#01801)\u003C\u002Fspan>     | CVPR 18                        | Un                            | -                  | √       | 72.8   | -           |\n| \u003Cspan id = \"41902\">[FramePred*](#11902)\u003C\u002Fspan>    | IJCAI 19                       | Un                            | -                  | √       | 73.4   | -           |\n| \u003Cspan id = \"41901-1\">[Mem-AE](#01901)\u003C\u002Fspan>      | ICCV 19                        | Un                            | -                  | √       | 71.2   | -           |\n| \u003Cspan id = \"42005\">[MNAD](#02005)\u003C\u002Fspan>          | CVPR 20                        | Un                            | -                  | √       | 70.5   | -           |\n| \u003Cspan id = \"42011\">[VEC](#02011)\u003C\u002Fspan>           | ACM MM 20                      | Un                            | -                  | √       | 74.8   | -           |\n| \u003Cspan id ='42014'>[ST-Graph](#02014)\u003C\u002Fspan>       | ACM MM 20                      | Un                            | -                  | √       | 74.7   | -           |\n| \u003Cspan id = '42013'>[CAC](#02013)\u003C\u002Fspan>           | ACM MM 20                      | Un                            | -                  | √       | 79.3   |             |\n| \u003Cspan id='42101'>[AMMC](#02101)\u003C\u002Fspan>            | AAAI 21                        | Un                            | -                  | √       | 73.7   | -           |\n| \u003Cspan id='42102'>[SSMT](#02102)\u003C\u002Fspan>            | CVPR 21                        | Un                            | -                  | √       | 82.4   | -           |\n| \u003Cspan 
id='42103'>[HF2-VAD](#02103)\u003C\u002Fspan>         | ICCV 21                        | Un                            | -                  | √       | 76.2   | -           |\n| \u003Cspan id='42104'>[ROADMAP](#02104)\u003C\u002Fspan>         | TNNLS 21                       | Un                            | -                  | √       | 76.6   | -           |\n| \u003Cspan id='42202'>[BDPN](#02202)\u003C\u002Fspan>         | AAAI 22                       | Un                            | -                  | √       | 78.1   | -           |\n| \u003Cspan id = \"41902-1\">[MLEP](#11902)\u003C\u002Fspan>        | IJCAI 19                       | 10% test vids with Video Anno | -                  | √       | 75.6   | -           |\n| \u003Cspan id = \"41902-2\">[MLEP](#11902)\u003C\u002Fspan>        | IJCAI 19                       | 10% test vids with Frame Anno | -                  | √       | 76.8   | -           |\n| \u003Cspan id = \"42002-1\">[Sultani.etl](#12002)\u003C\u002Fspan> | ICME 2020                      | Weakly (Re-Organized Dataset) | C3D-RGB            | X       | 86.3   | 0.15        |\n| \u003Cspan id = \"42002-2\">[IBL](#12002)\u003C\u002Fspan>         | ICME 2020                      | Weakly (Re-Organized Dataset) | I3D-RGB            | X       | 82.5   | 0.10        |\n| \u003Cspan id = \"41901-2\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19                        | Weakly (Re-Organized Dataset) | C3D-RGB            | √       | 76.44  | -           |\n| \u003Cspan id = \"41901-3\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19                        | Weakly (Re-Organized Dataset) | TSN-Flow           | √       | 84.13  | -           |\n| \u003Cspan id = \"41901-4\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19                        | Weakly (Re-Organized Dataset) | TSN-RGB            | √       | 84.44  | -           |\n| \u003Cspan id = \"42002\">[AR-Net](#12002)\u003C\u002Fspan>        | ICME 20                        | Weakly 
(Re-Organized Dataset) | I3D-RGB & I3D Flow | X       | 91.24  | 0.10        |\n| \u003Cspan id = \"42002\">[CLAWS](#12004)\u003C\u002Fspan>         | ECCV 20                        | Weakly (Re-Organized Dataset) | C3D-RGB            | √       | 89.67  |             |\n| \u003Cspan id='42101'>[MIST](#12101)\u003C\u002Fspan>            | CVPR 21                        | Weakly (Re-Organized Dataset) | I3D-RGB            | √       | 94.83  | 0.05        |\n| \u003Cspan id='42102'>[RTFM](#12102)\u003C\u002Fspan>            | ICCV 21                        | Weakly (Re-Organized Dataset) | I3D-RGB            | X       | 97.21  | -           |\n| \u003Cspan id='42102'>[CRFD](#12105)\u003C\u002Fspan>            | TIP 21                         | Weakly (Re-Organized Dataset) | I3D-RGB            | X       | 97.48  | -           |\n| \u003Cspan id='42201_0'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                        | Weakly (Re-Organized Dataset) | C3D-RGB            | X       | 94.81  | -      |\n| \u003Cspan id='42201_1'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                        | Weakly (Re-Organized Dataset) | I3D-RGB            | X       | 96.08  | -      |\n| \u003Cspan id='42201_1'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                        | Weakly (Re-Organized Dataset) | VideoSwin-RGB            | X       | 97.32  | -      |\n| \u003Cspan id='42203_1'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22                        | Weakly (Re-Organized Dataset) | ResNext           | X       | 86.21  | -      |\n| \u003Cspan id='42203_2'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22                        | Un | ResNext           | X       | 78.93  | -      |\n\n## Performance Comparison on Avenue \n| Model                                                        | Reported on Conference\u002FJournal | Supervision                   | Feature                | End2End | AUC(%) |\n| 
------------------------------------------------------------ | ------------------------------ | ----------------------------- | ---------------------- | ------- | ------ |\n| \u003Cspan id = \"51601\">[Conv-AE](#01601)\u003C\u002Fspan>                  | CVPR 16                        | Un                            | -                      | √       | 70.2   |\n| \u003Cspan id = \"51601-2\">[Conv-AE*](#01801)\u003C\u002Fspan>               | CVPR 18                        | Un                            | -                      | √       | 80.0   |\n| \u003Cspan id = \"51703\">[ConvLSTM-AE](#01703)\u003C\u002Fspan>              | ICME 17                        | Un                            | -                      | √       | 77.0   |\n| \u003Cspan id = \"51706\">[DeepAppearance](#01706)\u003C\u002Fspan>           | ICAIP 17                       | Un                            | -                      | √       | 84.6   |\n| \u003Cspan id = \"51705\">[Unmasking](#01705)\u003C\u002Fspan>                | ICCV 17                        | Un                            | 3D gradients+VGG conv5 | X       | 80.6   |\n| \u003Cspan id = \"51702\">[stacked-RNN](#01702)\u003C\u002Fspan>              | ICCV 17                        | Un                            | -                      | √       | 81.7   |\n| \u003Cspan id = \"51801\">[FramePred](#01801)\u003C\u002Fspan>                | CVPR 18                        | Un                            | -                      | √       | 85.1   |\n| \u003Cspan id = \"51901-1\">[Mem-AE](#01901)\u003C\u002Fspan>                 | ICCV 19                        | Un                            | -                      | √       | 83.3   |\n| \u003Cspan id = \"51904\">[Appearance-Motion Correspondence](#01904) \u003C\u002Fspan> | ICCV 19               | Un                            | -                      | √       | 86.9   |\n| \u003Cspan id = \"51902\">[FramePred*](#11902)\u003C\u002Fspan>               | IJCAI 19         
              | Un                            | -                      | √       | 89.2   |\n| \u003Cspan id = \"52005\">[MNAD](#02005)\u003C\u002Fspan>                     | CVPR 20                        | Un                            | -                      | √       | 88.5   |\n| \u003Cspan id = \"52011\">[VEC](#02011)\u003C\u002Fspan>                      | ACM MM 20                      | Un                            | -                      | √       | 90.2   |\n| \u003Cspan id = '52014'>[ST-Graph](#02014)\u003C\u002Fspan>                 | ACM MM 20                      | Un                            | -                      | √       | 89.6   |\n| \u003Cspan id = '52013'>[CAC](#02013)\u003C\u002Fspan>                      | ACM MM 20                      | Un                            | -                      | √       | 87.0   |\n| \u003Cspan id='52101'>[AMMC](#02101)\u003C\u002Fspan>                       | AAAI 21                        | Un                            | -                      | √       | 86.6   |\n| \u003Cspan id='52102'>[SSMT](#02102)\u003C\u002Fspan>                       | CVPR 21                        | Un                            | -                      | √       | 91.5   |\n| \u003Cspan id='52103'>[HF2-VAD](#02103)\u003C\u002Fspan>                    | ICCV 21                        | Un                            | -                      | √       | 91.1   |\n| \u003Cspan id='52104'>[ROADMAP](#02104)\u003C\u002Fspan>                    | TNNLS 21                       | Un                            | -                      | √       | 88.3   |\n| \u003Cspan id='52105'>[AEP](#02105)\u003C\u002Fspan>                        | TNNLS 21                       | Un                            | -                      | √       | 90.2   |\n| \u003Cspan id='52201'>[Causal](#02201)\u003C\u002Fspan>                        | AAAI 22                       | Un                            | I3D-RGB                     | X       | 90.3  
 |\n| \u003Cspan id='52202'>[BDPN](#02202)\u003C\u002Fspan>                        | AAAI 22                       | Un                            | -                    |  √     | 90.3   |\n| \u003Cspan id = \"51801-1\">[MLEP](#11902)\u003C\u002Fspan>                   | IJCAI 19                       | 10% test vids with Video Anno | -                      | √       | 91.3   |\n| \u003Cspan id = \"51801-2\">[MLEP](#11902)\u003C\u002Fspan>                   | IJCAI 19                       | 10% test vids with Frame Anno | -                      | √       | 92.8   |\n\n## Performance Comparison on XD-Violence \n| Model                                                 | Reported on Conference\u002FJournal | Supervision              | Feature             | Encoder-based | 32 Segments | AP(%)  |\n| ----------------------------------------------------- | ------------------------------ | ------------------------ | ------------------- | ------- |-------------| ------ |\n| \u003Cspan id='61801'>[Sultani et al.](#11801)\u003C\u002Fspan>      | ECCV 2020 (reported by Wu)     | Weakly                   | I3D-RGB             | X       |   √         | 73.20  |     \n| \u003Cspan id='62003'>[Wu et al.](#12003)\u003C\u002Fspan>           | ECCV 2020                      | Weakly                   | C3D-RGB             | X       |   X         | 67.19  |\n| \u003Cspan id='62003-1'>[Wu et al.](#12003)\u003C\u002Fspan>         | ECCV 2020                      | Weakly                   | I3D-RGB+Audio       | X       |   X         | 78.64  |\n| \u003Cspan id = \"62102\">[RTFM](#12102)\u003C\u002Fspan>              | ICCV 2021                      | Weakly                   | I3D-RGB             | X       |   √         | 77.81  |\n| \u003Cspan id = \"62105\">[CRFD](#12105)\u003C\u002Fspan>              | TIP 2021                       | Weakly                   | I3D-RGB             | X       |   √         | 75.90  |\n| \u003Cspan id = \"62201_0\">[MSL](#12201)\u003C\u002Fspan>       
       | AAAI 2022                       | Weakly                   | C3D-RGB             | X       |    X         | 75.53  |\n| \u003Cspan id = \"62201_1\">[MSL](#12201)\u003C\u002Fspan>              | AAAI 2022                       | Weakly                   | I3D-RGB             | X       |    X         | 78.28  |\n| \u003Cspan id = \"62201_2\">[MSL](#12201)\u003C\u002Fspan>              | AAAI 2022                       | Weakly                   | VideoSwin-RGB             | X       |    X         | 78.59  |\n\n","# 令人惊叹的视频异常检测  [![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n用于视频异常检测的论文及已发布的代码集合。\n\n如有任何补充或错误，请提交 issue、pull request，或发送邮件至 `fjchange@hotmail.com`。\n\n## 最新更新\n- AAAI 2022\n- CVPR 2022\n\n## 数据集\n0. UMN [`下载链接`](http:\u002F\u002Fmha.cs.umn.edu\u002F)\n1. UCSD [`下载链接`](http:\u002F\u002Fwww.svcl.ucsd.edu\u002Fprojects\u002Fanomaly\u002Fdataset.html)\n2. 地铁出入口 [`下载链接`](http:\u002F\u002Fvision.eecs.yorku.ca\u002Fresearch\u002Fanomalous-behaviour-data\u002F)\n3. CUHK 大道 [`下载链接`](http:\u002F\u002Fwww.cse.cuhk.edu.hk\u002Fleojia\u002Fprojects\u002Fdetectabnormal\u002Fdataset.html)\n    - HD-Avenue \u003Cspan id = \"05\">[基于骨架](#01902)\u003C\u002Fspan>\n4. 上海理工 [`下载链接`](https:\u002F\u002Fsvip-lab.github.io\u002Fdataset\u002Fcampus_dataset.html)\n    - HD-ShanghaiTech \u003Cspan id = \"00\">[基于骨架](#01902)\u003C\u002Fspan>\n5. UCF-Crime（弱监督）\n    - UCFCrime2Local（UCF-Crime 的子集，但带有空间标注。）[`下载链接`](http:\u002F\u002Fimagelab.ing.unimore.it\u002FUCFCrime2Local)，\u003Cspan id = \"01\">[Ano-Locality](#21902)\u003C\u002Fspan>\n    - 空间时间标注 [`下载链接`](https:\u002F\u002Fgithub.com\u002Fxuzero\u002FUCFCrime_BoundingBox_Annotation) \u003Cspan id = \"02\">[背景偏置](#21901)\u003C\u002Fspan>\n6. 交通-火车\n7. Belleview\n8. 
街景（WACV 2020）\u003Cspan id = \"03\">[街景](#02001)\u003C\u002Fspan>, [`下载链接`](https:\u002F\u002Fwww.merl.com\u002Fdemos\u002Fvideo-anomaly-detection)\n9. IITB-走廊（WACV 2020）\u003Cspan id = \"04\">[Rodrigurs.etl](#02002)\u003C\u002Fspan>\n10. XD-Violence（ECCV 2020）\u003Cspan id ='05'>[XD-Violence](#12003)\u003C\u002Fspan>[`下载链接`](https:\u002F\u002Froc-ng.github.io\u002FXD-Violence\u002F)\n11. ADOC（ACCV 2020）\u003Cspan id ='06'>[ADOC](#02012)\u003C\u002Fspan>[`下载链接`](http:\u002F\u002Fqil.uh.edu\u002Fmain\u002Fdatasets\u002F)\n12. UBnormal（CVPR 2022）\u003Cspan id='07'>[UBnormal] [`项目链接`](https:\u002F\u002Fgithub.com\u002Flilygeorgescu\u002FUBnormal) `开放集`\n\n__以下数据集涉及行车记录仪视频或监控视频中的交通事故预测__\n\n1. CADP [(汽车碰撞事故检测与预测)](https:\u002F\u002Fgithub.com\u002Fankitshah009\u002FCarCrash_forecasting_and_detection)\n2. DAD  [论文](https:\u002F\u002Fyuxng.github.io\u002Fchan_accv16.pdf), [`下载链接`](https:\u002F\u002Faliensunmin.github.io\u002Fproject\u002Fdashcam\u002F)\n3. A3D  [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.00618?)，[`下载链接`](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002Ftad-IROS2019)\n4. DADA  [`下载链接`](https:\u002F\u002Fgithub.com\u002FJWFangit\u002FLOTVS-DADA)\n5. DoTA   [`下载链接`](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002FDetection-of-Traffic-Anomaly)\n6. Iowa DOT [`下载链接`](https:\u002F\u002Fwww.aicitychallenge.org\u002F2018-ai-city-challenge\u002F)\n\n\n1. Driver_Anomaly [项目链接](https:\u002F\u002Fgithub.com\u002Fokankop\u002FDriver-Anomaly-Detection)\n-----\n## 无监督\n### 2016年\n1. \u003Cspan id = \"01601\">[Conv-AE]\u003C\u002Fspan> [学习视频序列中的时间规律](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FHasan_Learning_Temporal_Regularity_CVPR_2016_paper.pdf)，`CVPR 16`。[代码](https:\u002F\u002Fgithub.com\u002Fiwyoo\u002FTemporalRegularityDetector-tensorflow\u002Fblob\u002Fmaster\u002Fmodel.py)\n### 2017年\n1. 
\u003Cspan id = \"01701\">[Hinami.etl]\u003C\u002Fspan> [通过学习深度通用知识联合检测和重新计数异常事件](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FHinami_Joint_Detection_and_ICCV_2017_paper.pdf)，`ICCV 2017`。（可解释的 VAD）\n2. \u003Cspan id = \"01702\">[Stacked-RNN]\u003C\u002Fspan> [在堆叠 RNN 框架中重新审视基于稀疏编码的异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FLuo_A_Revisit_of_ICCV_2017_paper.pdf)，`ICCV 2017`。[代码](https:\u002F\u002Fgithub.com\u002FStevenLiuWen\u002FsRNN_TSC_Anomaly_Detection)\n3. \u003Cspan id = \"01703\">[ConvLSTM-AE]\u003C\u002Fspan> [利用卷积 LSTM 记住历史以进行异常检测](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8019325)，`ICME 2017`。[代码](https:\u002F\u002Fgithub.com\u002Fzachluo\u002Fconvlstm_anomaly_detection)\n4. \u003Cspan id = \"01704\">[Conv3D-AE]\u003C\u002Fspan> [用于视频异常检测的空间–时间自编码器](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3123266.3123451)，`ACM MM 17`。\n5. \u003Cspan id = \"01705\">[Unmasking]\u003C\u002Fspan> [揭开视频中的异常事件](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FIonescu_Unmasking_the_Abnormal_ICCV_2017_paper.pdf)，`ICCV 17`。\n6. \u003Cspan id = \"01706\">[DeepAppearance]\u003C\u002Fspan> [用于视频中异常行为检测的深度外观特征](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FRadu_Tudor_Ionescu\u002Fpublication\u002F320361315_Deep_Appearance_Features_for_Abnormal_Behavior_Detection_in_Video\u002Flinks\u002F5a469e9fa6fdcce1971b7258\u002FDeep-Appearance-Features-for-Abnormal-Behavior-Detection-in-Video.pdf)\n### 2018年\n1. \u003Cspan id = \"01801\">[FramePred]\u003C\u002Fspan> [用于异常检测的未来帧预测——一个新的基线](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLiu_Future_Frame_Prediction_CVPR_2018_paper.pdf)，`CVPR 2018`。[代码](https:\u002F\u002Fgithub.com\u002FStevenLiuWen\u002Fano_pred_cvpr2018)\n2. 
\u003Cspan id = \"01802\">[ALOOC]\u003C\u002Fspan> [用于新颖性检测的对抗式学习的一类分类器](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSabokrou_Adversarially_Learned_One-Class_CVPR_2018_paper.pdf)，`CVPR 2018`。[代码](https:\u002F\u002Fgithub.com\u002Fkhalooei\u002FALOCC-CVPR2018)\n3. [无需了解正常情况即可检测异常：一种用于无监督视频异常事件检测的两阶段方法](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3240508.3240615)，`ACM MM 18`。\n\n### 2019年\n1. \u003Cspan id = \"01901\">[Mem-AE]\u003C\u002Fspan> [记忆正常以检测异常：用于无监督异常检测的记忆增强深度自编码器](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FGong_Memorizing_Normality_to_Detect_Anomaly_Memory-Augmented_Deep_Autoencoder_for_Unsupervised_ICCV_2019_paper.pdf), `ICCV 2019`。[代码](https:\u002F\u002Fgithub.com\u002Fdonggong1\u002Fmemae-anomaly-detection)\n2. \u003Cspan id = \"01902\">[基于骨架]\u003C\u002Fspan> [学习视频中骨架轨迹的规律性以进行异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FMorais_Learning_Regularity_in_Skeleton_Trajectories_for_Anomaly_Detection_in_Videos_CVPR_2019_paper.pdf), `CVPR 2019`。[代码](https:\u002F\u002Fgithub.com\u002FRomeroBarata\u002Fskeleton_based_anomaly_detection)\n3. \u003Cspan id = \"01903\">[以目标为中心]\u003C\u002Fspan> [以目标为中心的自编码器和虚拟异常用于异常事件检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FIonescu_Object-Centric_Auto-Encoders_and_Dummy_Anomalies_for_Abnormal_Event_Detection_in_CVPR_2019_paper.pdf), `CVPR 2019`。\n4. \u003Cspan id = \"01904\">[外观-运动对应关系]\u003C\u002Fspan> [基于外观-运动对应关系的视频序列异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FNguyen_Anomaly_Detection_in_Video_Sequence_With_Appearance-Motion_Correspondence_ICCV_2019_paper.pdf), `ICCV 2019`。[代码](https:\u002F\u002Fgithub.com\u002Fnguyetn89\u002FAnomaly_detection_ICCV2019)\n5. 
\u003Cspan id = \"01905\">[AnoPCN]\u003C\u002Fspan>[AnoPCN：基于深度预测编码网络的视频异常检测](https:\u002F\u002Fpeople.cs.clemson.edu\u002F~jzwang\u002F20018630\u002Fmm2019\u002Fp1805-ye.pdf), ACM MM 2019。\n### 2020年\n1. \u003Cspan id = \"02001\">[街景]\u003C\u002Fspan> [街景：一个新的视频异常检测数据集及评估协议](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FRamachandra_Street_Scene_A_new_dataset_and_evaluation_protocol_for_video_WACV_2020_paper.pdf), `WACV 2020`。\n2. \u003Cspan id = \"02002\">[Rodrigurs.etl])\u003C\u002Fspan> [多时间尺度轨迹预测用于异常人类活动检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_WACV_2020\u002Fpapers\u002FRodrigues_Multi-timescale_Trajectory_Prediction_for_Abnormal_Human_Activity_Detection_WACV_2020_paper.pdf), `WACV 2020`。\n3. \u003Cspan id = \"02003\">[GEPC]\u003C\u002Fspan> [用于异常检测的图嵌入姿态聚类](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.11850.pdf), `CVPR 2020`。[代码](https:\u002F\u002Fgithub.com\u002Famirmk89\u002Fgepc)\n4. \u003Cspan id = \"02004\">[自训练]\u003C\u002Fspan> [用于端到端视频异常检测的自训练深度序数回归](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.06780.pdf), `CVPR 2020`。\n5. \u003Cspan id = \"02005\">[MNAD]\u003C\u002Fspan> [学习记忆引导的正常模式用于异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.13228.pdf), `CVPR 2020`。[代码](https:\u002F\u002Fcvlab.yonsei.ac.kr\u002Fprojects\u002FMNAD)\n6. \u003Cspan id = \"02006\">[持续AD]]\u003C\u002Fspan> [监控视频中异常检测的持续学习](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.07941),`CVPR 2020研讨会`。\n7. \u003Cspan id = \"02007\">[OGNet]\u003C\u002Fspan> [旧即是金：重新定义对抗学习的一类分类器训练范式](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZaheer_Old_Is_Gold_Redefining_the_Adversarially_Learned_One-Class_Classifier_Training_CVPR_2020_paper.pdf), `CVPR 2020`。[代码](https:\u002F\u002Fgithub.com\u002Fxaggi\u002FOGNet)\n8. 
\u003Cspan id = \"02008\">[任意-shot]\u003C\u002Fspan> [监控视频中的任意-shot顺序异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPRW_2020\u002Fpapers\u002Fw54\u002FDoshi_Any-Shot_Sequential_Anomaly_Detection_in_Surveillance_Videos_CVPRW_2020_paper.pdf),`CVPR 2020研讨会`。\n9. \u003Cspan id = \"02009\">[少-shot]\u003C\u002Fspan>[少-shot场景自适应异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07843.pdf)`ECCV 2020亮点` [代码](https:\u002F\u002Fgithub.com\u002Fyiweilu3\u002FFew-shot-Scene-adaptive-Anomaly-Detection)\n10. \u003Cspan id = \"02010\">[CDAE]\u003C\u002Fspan>[用于视频异常检测的聚类驱动深度自编码器](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123600324.pdf)`ECCV 2020`\n11. \u003Cspan id = \"02011\">[VEC]\u003C\u002Fspan>[完形填空助力：通过学习完成视频事件实现有效的视频异常检测](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.11988)`ACM MM 2020口头报告` [代码](https:\u002F\u002Fgithub.com\u002Fyuguangnudt\u002FVEC_VAD)\n12. \u003Cspan id ='02012'>[ADOC]\u003C\u002Fspan>[校园一日——单摄像头事件异常检测数据集] `ACCV 2020`\n13. \u003Cspan id ='02013'>[CAC]\u003C\u002Fspan>[用于视频异常检测的聚类注意力对比](http:\u002F\u002Fweb.pkusz.edu.cn\u002Fadsp\u002Ffiles\u002F2020\u002F08\u002FCluster_Attention_Contrast_for_Video_Anomaly_Detection.pdf) `ACM MM 2020`\n14. \u003Cspan id ='02014'>[STC-图]\u003C\u002Fspan>[场景感知上下文推理用于视频中无监督异常事件检测](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3394171.3413887) `ACM MM 2020`\n\n### 2021年\n1. \u003Cspan id ='02101'>[AMCM]\u003C\u002Fspan>[用于视频异常检测的外观-运动记忆一致性网络](https:\u002F\u002Fwww.aaai.org\u002FAAAI21Papers\u002FAAAI-4120.CaiR.pdf) `AAAI 2021`\n2. \u003Cspan id='02102'>[SSMT，自监督多任务]\u003C\u002Fspan>[通过自监督和多任务学习进行视频异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.07491.pdf) `CVPR 2021`\n3. \u003Cspan id='02103'>[HF2-VAD]\u003C\u002Fspan>[一种混合视频异常检测框架，通过记忆增强的流重建和流引导的帧预测实现](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.06852.pdf)`ICCV 2021口头报告`\n4. 
\u003Cspan id='02104'>[ROADMAP]\u003C\u002Fspan>[通过多路径帧预测实现鲁棒的无监督视频异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2011.02763)`TNNLS 2021`\n5. \u003Cspan id='02105'>[AEP]\u003C\u002Fspan>[通过对抗性事件预测进行异常事件检测与定位](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9346050\u002F) `TNNLS 2021`\n\n### 2022年\n1. \u003Cspan id='02201'>[Casual]\u003C\u002Fspan>[从因果推断角度看无监督视频异常检测](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-37.LinX.pdf)`AAAI 2022`\n2. \u003Cspan id='02202'>[BDPN]\u003C\u002Fspan>[双向预测网络中的全面正则化用于视频异常检测](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-470.ChenC.pdf)`AAAI 2022`\n3. \u003Cspan id='02203'>[GCL]\u003C\u002Fspan>[生成式协作学习用于无监督视频异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2203.03962.pdf)`CVPR 2022`\n\n## 弱监督\n### 2018年\n1. \u003Cspan id = \"11801\">[Sultani.etl]\u003C\u002Fspan> [监控视频中的真实世界异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSultani_Real-World_Anomaly_Detection_CVPR_2018_paper.pdf), `CVPR 2018` [代码](https:\u002F\u002Fgithub.com\u002FWaqasSultani\u002FAnomalyDetectionCVPR2018)\n\n### 2019年\n1. \u003Cspan id = \"11901\">[GCN-Anomaly]\u003C\u002Fspan> [图卷积标签噪声清理器：训练即插即用的动作分类器用于异常检测](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FZhong_Graph_Convolutional_Label_Noise_Cleaner_Train_a_Plug-And-Play_Action_Classifier_CVPR_2019_paper.pdf),` CVPR 2019`, \n[代码](https:\u002F\u002Fgithub.com\u002Fjx-zhong-for-academic-purpose\u002FGCN-Anomaly-Detection)\n2. \u003Cspan id = \"11902\">[MLEP]\u003C\u002Fspan> [基于边缘学习嵌入预测的少样本视频异常检测](https:\u002F\u002Fpdfs.semanticscholar.org\u002Fe878\u002F6acbfabaf4938c9c8e2d3a15e0f110a1ec7f.pdf), `IJCAI 2019`[代码](https:\u002F\u002Fgithub.com\u002Fsvip-lab\u002FMLEP)。\n3. \u003Cspan id = \"11903\">[IBL]\u003C\u002Fspan> [带有互补内部袋损失的时序卷积网络用于弱监督异常检测](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8803657\u002F)。`ICIP 19`。\n4. 
\u003Cspan id = \"11904\">[Motion-Aware]\u003C\u002Fspan> [运动感知特征用于改进视频异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1907.10211)。`BMVC 19`。\n### 2020年\n1. \u003Cspan id = \"12001\">[Siamese]\u003C\u002Fspan> [使用孪生网络学习距离函数以定位视频中的异常](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.09189), `WACV 2020`。\n2. \u003Cspan id = \"12002\">[AR-Net]\u003C\u002Fspan> [基于中心引导判别学习的弱监督视频异常检测](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9102722),` ICME 2020`。[代码](https:\u002F\u002Fgithub.com\u002Fwanboyang\u002FAnomaly_AR_Net_ICME_2020)\n3. \u003Cspan id ='12003'>['XD-Violence']\u003C\u002Fspan> [不仅要观察，还要倾听：在弱监督下学习多模态暴力检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.04687.pdf) `ECCV 2020`\n4. \u003Cspan id ='12004'>[CLAWS]\u003C\u002Fspan> [CLAWS：利用正常性抑制的聚类辅助弱监督学习进行异常事件检测](https:\u002F\u002Fwww.ecva.net\u002Fpapers\u002Feccv_2020\u002Fpapers_ECCV\u002Fpapers\u002F123670358.pdf) `ECCV 2020`\n### 2021年\n1. \u003Cspan id=\"12101\">[MIST]\u003C\u002Fspan> [MIST：用于视频异常检测的多实例自训练框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.01633) `CVPR 2021` [项目页面](https:\u002F\u002Fkiwi-fung.win\u002F2021\u002F04\u002F28\u002FMIST\u002F)\n2. \u003Cspan id='12102'>[RTFM]\u003C\u002Fspan> [基于长短时程特征对比学习的弱监督视频异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2101.10030.pdf) `ICCV 2021`[代码](https:\u002F\u002Fgithub.com\u002Ftianyu0207\u002FRTFM)\n3. \u003Cspa id='12103'>[STAD]\u003C\u002Fspan>[监控视频中的弱监督时空异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2108.03825) `IJCAI 2021`\n4. \u003Cspan id='12104'>[WSAL]\u003C\u002Fspan>[从弱标签视频中定位异常](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.08944)`TIP 2021` [代码](https:\u002F\u002Fgithub.com\u002Fktr-hubrt\u002FWSAL)\n5. \u003Cspan id='12105'>[CRFD]\u003C\u002Fspan>[学习因果时序关系和特征判别用于异常检测](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9369126\u002F)`TIP 2021`\n### 2022年\n1. 
\u003Cspan id='12201'>[MSL]\u003C\u002Fspan>[基于Transformer的自训练多序列学习用于弱监督视频异常检测](https:\u002F\u002Fwww.aaai.org\u002FAAAI22Papers\u002FAAAI-6637.LiS.pdf)`AAAI 2022`\n\n## 监督学习\n### 2019年\n1. \u003Cspan id = \"21901\">[Background-Bias]\u003C\u002Fspan>[探索背景偏差用于监控视频中的异常检测](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3343031.3350998), `ACM MM 19`。\n2. \u003Cspan id = \"21902\">[Ano-Locality]\u003C\u002Fspan>[视频监控中的异常局部性](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.10364)。\n\n## 其他\n### 2020年\n1. \u003Cspan id =\"62001\">[Few-Shot]\u003C\u002Fspan>[少样本场景自适应异常检测](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.07843) `ECCV 2020`[代码](https:\u002F\u002Fgithub.com\u002Fyiweilu3\u002FFew-shot-Scene-adaptive-Anomaly-Detection)\n------\n## 综述 \u002F 调查\n1. 基于深度学习的无监督和半监督视频异常检测方法综述，J. Image, 2018年。[页面](https:\u002F\u002Fbeedotkiran.github.io\u002FVideoAnomaly.html)\n2. 异常检测中的深度学习：综述，[论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.03407.pdf)\n3. 智能监控中的视频异常检测 [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.00222.pdf)\n4. 单场景视频异常检测综述，`TPAMI 2020` [论文](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.05993.pdf)。\n\n\n## 书籍\n1. 离群点分析。Charu C. Aggarwal\n## 特定场景\n\n------\n\n一般来说，近年来的异常检测研究大多基于行人数据集（如UCSD、Avenue、ShanghaiTech等），或UCF-Crime（真实世界异常）。\n然而，也有一些研究专注于特定场景，如下所示。\n\n### 交通\nCVPR研讨会，AI City挑战赛系列。\n#### 第一人称交通\n​\t\t无监督的第一人称视频中的交通事故检测，IROS 2019。\n\n#### 驾驶\n\n​\t\t何时、何地、何事？用于驾驶视频异常检测的新数据集。[github](https:\u002F\u002Fgithub.com\u002FMoonBlvd\u002FDetection-of-Traffic-Anomaly)\n\n### 老人跌倒\n\n### 打斗\u002F暴力\n1. 监控视频中基于定位引导的格斗动作检测。ICME 2019。\n2. \n\n### 社会\u002F群体异常\n1. Social-BiGAT：使用自行车GAN和图注意力网络进行多模态轨迹预测，NeurIPS 2019。\n\n## 相关主题：\n1. 视频表示（无监督视频表示、重建、预测等）\n2. 目标检测\n3. 行人检测\n4. 骨骼检测\n5. 图神经网络\n6. GAN\n7. 动作识别 \u002F 时间动作定位\n8. 度量学习\n9. 标签噪声学习\n10. 跨模态\u002F多模态\n11. 字典学习\n12. 单类分类 \u002F 新奇检测 \u002F 分布外检测\n13. 动作识别。\n    - 事件中的人：一个用于复杂事件中以人为中心的视频分析的大规模基准测试。ACM MM 2020研讨会。\n\n## 性能评估方法\n1. AUC\n2. PR-AUC\n3. 分数差距\n4. 
正常情况下的误报率，阈值为0.5（弱监督，由CVPR 18提出）\n\n正如Issue [#12](https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F12)中所讨论的，如果论文提供了“宏AUC”，则以下报告的结果将被标记为“微AUC”，并附上`*`。\n\n## UCF-Crime 数据集上的性能对比\n| 模型                                               | 发表于会议\u002F期刊 | 监督方式 | 特征  | 编码器类型 | 32段 | AUC (%) | 常规场景下FAR@0.5 (%) |\n| --------------------------------------------------- | ------------------------------- | ---------- | -------- | ------- | ----------- | ------- | --------------------- |\n| \u003Cspan id = \"31801\">[Sultani.etl](#11801)\u003C\u002Fspan>     | CVPR 18                         | 弱监督     | C3D RGB  | X       | √           | 75.41   | 1.9                   |\n| \u003Cspan id = \"31903\">[IBL](#11903)\u003C\u002Fspan>             | ICIP 19                         | 弱监督     | C3D RGB  | X       | √           | 78.66   | -                     |\n| \u003Cspan id = \"31904\">[Motion-Aware](#11904)\u003C\u002Fspan>    | BMVC 19                         | 弱监督     | PWC 流   | X       | √           | 79.0    | -                     |\n| \u003Cspan id = \"31901\">[GCN-Anomaly](#11901)\u003C\u002Fspan>     | CVPR 19                         | 弱监督     | TSN RGB  | √       | X           | 82.12   | 0.1                   |\n| \u003Cspan id = '32013'>[ST-Graph](#02014)\u003C\u002Fspan>        | ACM MM 20                       | 无监督     | -        | √       | X           | 72.7    |                       |\n| \u003Cspan id = \"31902\">[Background-Bias](#21901)\u003C\u002Fspan> | ACM MM 19                       | 全监督     | NLN RGB  | √       | X           | 82.0    | -                     |\n| \u003Cspan id = \"31905\">[CLAWS](#12004)\u003C\u002Fspan>           | ECCV 20                         | 弱监督     | C3D RGB  | √       | X           | 83.03   | -                     |\n| \u003Cspan id = \"32101\">[MIST](#12101)\u003C\u002Fspan>            | CVPR 21                         | 弱监督     | I3D RGB  | √       | X           | 82.30   | 0.13    
              |\n| \u003Cspan id = '32102'>[RTFM](#12102)\u003C\u002Fspan>            | ICCV 21                         | 弱监督     | I3D RGB  | X       | √           | 84.03   | -                     |\n| \u003Cspan id = '32104'>[WSAL](#12104)\u003C\u002Fspan>            | TIP 21                          | 弱监督     | I3D RGB  | X       | √           | 85.38   | -                     |\n| \u003Cspan id = '32104'>[CRFD](#12105)\u003C\u002Fspan>            | TIP 21                          | 弱监督     | I3D RGB  | X       | √           | 84.89   | -                     |\n| \u003Cspan id = '32201_1'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                          | 弱监督     | C3D RGB  | √        | X           | 82.85   | -                     |\n| \u003Cspan id = '32201_2'>[MSL](#12202)\u003C\u002Fspan>            | AAAI 22                          | 弱监督     | I3D RGB  | √        | X           | 85.30   | -                     |\n| \u003Cspan id = '32201_3'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22                          | 弱监督     | VideoSwin-RGB  | √        | X           | 85.62   | -                     |\n| \u003Cspan id = '32203_1'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22                          | 弱监督     | ResNext  | √        | X           | 79.84   | -                     |  \n| \u003Cspan id = '32203_2'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22                          | 无监督     | ResNext  | √        | X           | 71.04   | -                     |\n\n## 上海科技大学数据集上的性能对比\n| 模型                                             | 发表会议\u002F期刊 | 监督方式                   | 特征            | 基于编码器 | AUC(%) | FAR@0.5 (%) |\n| ------------------------------------------------- | -------------- | -------------------------- | ---------------- | ------- | ------ | ----------- |\n| \u003Cspan id = \"41601\">[Conv-AE](#01601)\u003C\u002Fspan>       | CVPR 16        | 无                         | -                | √       | 60.85  | -           
|\n| \u003Cspan id = \"41702\">[stacked-RNN](#01702)\u003C\u002Fspan>   | ICCV 17        | 无                         | -                | √       | 68.0   | -           |\n| \u003Cspan id = \"41801\">[FramePred](#01801)\u003C\u002Fspan>     | CVPR 18        | 无                         | -                | √       | 72.8   | -           |\n| \u003Cspan id = \"41902\">[FramePred*](#11902)\u003C\u002Fspan>    | IJCAI 19       | 无                         | -                | √       | 73.4   | -           |\n| \u003Cspan id = \"41901-1\">[Mem-AE](#01901)\u003C\u002Fspan>      | ICCV 19        | 无                         | -                | √       | 71.2   | -           |\n| \u003Cspan id = \"42005\">[MNAD](#02005)\u003C\u002Fspan>          | CVPR 20        | 无                         | -                | √       | 70.5   | -           |\n| \u003Cspan id = \"42011\">[VEC](#02011)\u003C\u002Fspan>           | ACM MM 20      | 无                         | -                | √       | 74.8   | -           |\n| \u003Cspan id ='42014'>[ST-Graph](#02014)\u003C\u002Fspan>       | ACM MM 20      | 无                         | -                | √       | 74.7   | -           |\n| \u003Cspan id = '42013'>[CAC](#02013)\u003C\u002Fspan>           | ACM MM 20      | 无                         | -                | √       | 79.3   |             |\n| \u003Cspan id='42101'>[AMMC](#02101)\u003C\u002Fspan>            | AAAI 21        | 无                         | -                | √       | 73.7   | -           |\n| \u003Cspan id='42102'>[SSMT](#02102)\u003C\u002Fspan>            | CVPR 21        | 无                         | -                | √       | 82.4   | -           |\n| \u003Cspan id='42103'>[HF2-VAD](#02103)\u003C\u002Fspan>         | ICCV 21        | 无                         | -                | √       | 76.2   | -           |\n| \u003Cspan id='42104'>[ROADMAP](#02104)\u003C\u002Fspan>         | TNNLS 21       | 无                         | -                | √       | 76.6 
  | -           |\n| \u003Cspan id='42202'>[BDPN](#02202)\u003C\u002Fspan>         | AAAI 22       | 无                         | -                | √       | 78.1   | -           |\n| \u003Cspan id = \"41902-1\">[MLEP](#11902)\u003C\u002Fspan>        | IJCAI 19       | 10% 测试视频带有视频标注 | -                | √       | 75.6   | -           |\n| \u003Cspan id = \"41902-2\">[MLEP](#11902)\u003C\u002Fspan>        | IJCAI 19       | 10% 测试视频带有帧级标注 | -                | √       | 76.8   | -           |\n| \u003Cspan id = \"42002-1\">[Sultani et al.](#12002)\u003C\u002Fspan> | ICME 2020      | 弱监督（重新组织的数据集） | C3D-RGB            | X       | 86.3   | 0.15        |\n| \u003Cspan id = \"42002-2\">[IBL](#12002)\u003C\u002Fspan>         | ICME 2020      | 弱监督（重新组织的数据集） | I3D-RGB            | X       | 82.5   | 0.10        |\n| \u003Cspan id = \"41901-2\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19        | 弱监督（重新组织的数据集） | C3D-RGB            | √       | 76.44  | -           |\n| \u003Cspan id = \"41901-3\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19        | 弱监督（重新组织的数据集） | TSN-Flow           | √       | 84.13  | -           |\n| \u003Cspan id = \"41901-4\">[GCN-Anomaly](#11901)\u003C\u002Fspan> | CVPR 19        | 弱监督（重新组织的数据集） | TSN-RGB            | √       | 84.44  | -           |\n| \u003Cspan id = \"42002\">[AR-Net](#12002)\u003C\u002Fspan>        | ICME 20        | 弱监督（重新组织的数据集） | I3D-RGB & I3D Flow | X       | 91.24  | 0.10        |\n| \u003Cspan id = \"42002\">[CLAWS](#12004)\u003C\u002Fspan>         | ECCV 20        | 弱监督（重新组织的数据集） | C3D-RGB            | √       | 89.67  |             |\n| \u003Cspan id='42101'>[MIST](#12101)\u003C\u002Fspan>            | CVPR 21        | 弱监督（重新组织的数据集） | I3D-RGB            | √       | 94.83  | 0.05        |\n| \u003Cspan id='42102'>[RTFM](#12102)\u003C\u002Fspan>            | ICCV 21        | 弱监督（重新组织的数据集） | I3D-RGB            | X       | 97.21  | -           |\n| \u003Cspan id='42102'>[CRFD](#12105)\u003C\u002Fspan>            | TIP 
21         | 弱监督（重新组织的数据集） | I3D-RGB            | X       | 97.48  | -           |\n| \u003Cspan id='42201_0'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22        | 弱监督（重新组织的数据集） | C3D-RGB            | X       | 94.81  | -      |\n| \u003Cspan id='42201_1'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22        | 弱监督（重新组织的数据集） | I3D-RGB            | X       | 96.08  | -      |\n| \u003Cspan id='42201_2'>[MSL](#12201)\u003C\u002Fspan>            | AAAI 22        | 弱监督（重新组织的数据集） | VideoSwin-RGB            | X       | 97.32  | -      |\n| \u003Cspan id='42203_1'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22        | 弱监督（重新组织的数据集） | ResNext           | X       | 86.21  | -      |\n| \u003Cspan id='42203_2'>[GCL](#12203)\u003C\u002Fspan>            | CVPR 22        | 无 | ResNext           | X       | 78.93  | -      |\n\n## Avenue 上的性能对比\n| 模型                                                        | 发表于会议\u002F期刊 | 监督方式                   | 特征                | 端到端 | AUC(%) |\n| ------------------------------------------------------------ | ------------------------------ | ----------------------------- | ---------------------- | ------- | ------ |\n| \u003Cspan id = \"51601\">[Conv-AE](#01601)\u003C\u002Fspan>                  | CVPR 16                        | 无                            | -                      | √       | 70.2   |\n| \u003Cspan id = \"51601-2\">[Conv-AE*](#01801)\u003C\u002Fspan>               | CVPR 18                        | 无                            | -                      | √       | 80.0   |\n| \u003Cspan id = \"51703\">[ConvLSTM-AE](#01703)\u003C\u002Fspan>              | ICME 17                        | 无                            | -                      | √       | 77.0   |\n| \u003Cspan id = \"51706\">[DeepAppearance](#01706)\u003C\u002Fspan>           | ICAIP 17                       | 无                            | -                      | √       | 84.6   |\n| \u003Cspan id = \"51705\">[Unmasking](#01705)\u003C\u002Fspan>  
              | ICCV 17                        | 无                            | 3D梯度+VGG conv5 | X       | 80.6   |\n| \u003Cspan id = \"51702\">[stacked-RNN](#01702)\u003C\u002Fspan>              | ICCV 17                        | 无                            | -                      | √       | 81.7   |\n| \u003Cspan id = \"51801\">[FramePred](#01801)\u003C\u002Fspan>                | CVPR 18                        | 无                            | -                      | √       | 85.1   |\n| \u003Cspan id = \"51901-1\">[Mem-AE](#01901)\u003C\u002Fspan>                 | ICCV 19                        | 无                            | -                      | √       | 83.3   |\n| \u003Cspan id = \"51904\">[Appearance-Motion Correspondence](#01904) \u003C\u002Fspan> | ICCV 19               | 无                            | -                      | √       | 86.9   |\n| \u003Cspan id = \"51902\">[FramePred*](#11902)\u003C\u002Fspan>               | IJCAI 19                       | 无                            | -                      | √       | 89.2   |\n| \u003Cspan id = \"52005\">[MNAD](#02005)\u003C\u002Fspan>                     | CVPR 20                        | 无                            | -                      | √       | 88.5   |\n| \u003Cspan id = \"52011\">[VEC](#02011)\u003C\u002Fspan>                      | ACM MM 20                      | 无                            | -                      | √       | 90.2   |\n| \u003Cspan id='52014'>[ST-Graph](#02014)\u003C\u002Fspan>                 | ACM MM 20                      | 无                            | -                      | √       | 89.6   |\n| \u003Cspan id='52013'>[CAC](#02013)\u003C\u002Fspan>                      | ACM MM 20                      | 无                            | -                      | √       | 87.0   |\n| \u003Cspan id='52101'>[AMMC](#02101)\u003C\u002Fspan>                       | AAAI 21                        | 无                            | -                      | √  
     | 86.6   |\n| \u003Cspan id='52102'>[SSMT](#02102)\u003C\u002Fspan>                       | CVPR 21                        | 无                            | -                      | √       | 91.5   |\n| \u003Cspan id='52103'>[HF2-VAD](#02103)\u003C\u002Fspan>                    | ICCV 21                        | 无                            | -                      | √       | 91.1   |\n| \u003Cspan id='52104'>[ROADMAP](#02104)\u003C\u002Fspan>                    | TNNLS 21                       | 无                            | -                      | √       | 88.3   |\n| \u003Cspan id='52105'>[AEP](#02105)\u003C\u002Fspan>                        | TNNLS 21                       | 无                            | -                      | √       | 90.2   |\n| \u003Cspan id='52201'>[Causal](#02201)\u003C\u002Fspan>                        | AAAI 22                       | 无                            | I3D-RGB                     | X       | 90.3   |\n| \u003Cspan id='52202'>[BDPN](#02202)\u003C\u002Fspan>                        | AAAI 22                       | 无                            | -                    |  √     | 90.3   |\n| \u003Cspan id = \"51801-1\">[MLEP](#11902)\u003C\u002Fspan>                   | IJCAI 19                       | 10%测试视频带视频标注 | -                      | √       | 91.3   |\n| \u003Cspan id = \"51801-2\">[MLEP](#11902)\u003C\u002Fspan>                   | IJCAI 19                       | 10%测试视频带帧级标注 | -                      | √       | 92.8   |\n\n## XD-Violence上的性能对比\n| 模型                                                 | 发表于会议\u002F期刊 | 监督方式              | 特征             | 编码器为基础 | 32段 | AP(%)  |\n| ----------------------------------------------------- | ------------------------------ | ------------------------ | ------------------- | ------- |-------------| ------ |\n| \u003Cspan id='61801'>[Sultani et al.](#11801)\u003C\u002Fspan>      | ECCV 2020（由Wu报道）     | 弱监督                   | I3D-RGB             | X       |   √         
| 73.20  |     \n| \u003Cspan id='62003'>[Wu et al.](#12003)\u003C\u002Fspan>           | ECCV 2020                      | 弱监督                   | C3D-RGB             | X       |   X         | 67.19  |\n| \u003Cspan id='62003-1'>[Wu et al.](#12003)\u003C\u002Fspan>         | ECCV 2020                      | 弱监督                   | I3D-RGB+音频       | X       |   X         | 78.64  |\n| \u003Cspan id = \"62102\">[RTFM](#12102)\u003C\u002Fspan>              | ICCV 2021                      | 弱监督                   | I3D-RGB             | X       |   √         | 77.81  |\n| \u003Cspan id = \"62105\">[CRFD](#12105)\u003C\u002Fspan>              | TIP 2021                       | 弱监督                   | I3D-RGB             | X       |   √         | 75.90  |\n| \u003Cspan id = \"62201_0\">[MSL](#12201)\u003C\u002Fspan>              | AAAI 2022                       | 弱监督                   | C3D-RGB             | X       |    X         | 75.53  |\n| \u003Cspan id = \"62201_1\">[MSL](#12201)\u003C\u002Fspan>              | AAAI 2022                       | 弱监督                   | I3D-RGB             | X       |    X         | 78.28  |\n| \u003Cspan id = \"62201_2\">[MSL](#12201)\u003C\u002Fspan>              | AAAI 2022                       | 弱监督                   | VideoSwin-RGB             | X       |    X         | 78.59  |","# awesome-video-anomaly-detection 快速上手指南\n\n`awesome-video-anomaly-detection` 是一个视频异常检测（Video Anomaly Detection, VAD）领域的开源资源合集，收录了相关的学术论文、代码实现以及数据集。本指南将帮助你快速了解如何利用该仓库获取资源并运行经典模型。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下基本要求。由于该仓库包含多个不同年份和架构的模型（如 Conv-AE, Mem-AE, Sultani et al. 等），具体依赖可能因模型而异，但通用环境如下：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04\u002F20.04) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python**: 3.6 - 3.9 (根据具体模型代码要求，较新的模型通常需要 3.8+)。\n*   **深度学习框架**: PyTorch (1.7+) 或 TensorFlow (1.15\u002F2.x)，视具体选择的模型代码库而定。\n*   **硬件**: 推荐使用 NVIDIA GPU (显存建议 8GB 以上) 以加速训练和推理。\n*   **其他依赖**:\n    *   `git`: 用于克隆仓库。\n    *   `ffmpeg`: 用于视频预处理。\n    *   `opencv-python`: 
用于图像读取和处理。\n\n**国内加速建议**：\n*   **Git 克隆**: 如果访问 GitHub 速度慢，可使用国内镜像源（如 Gitee 上的同步镜像，若有）或配置 Git 代理。\n*   **Python 包安装**: 推荐使用清华源或阿里源安装依赖。\n    ```bash\n    pip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 安装步骤\n\n由于这是一个资源列表（Awesome List）而非单一的可执行软件，**没有统一的安装命令**。你需要根据需求选择具体的论文模型进行部署。以下是通用的操作流程：\n\n### 1. 克隆资源列表仓库\n首先获取该合集的索引信息：\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection.git\ncd awesome-video-anomaly-detection\n```\n\n### 2. 选择并克隆具体模型代码\n浏览 README 中的 **Unsupervised** (无监督) 或 **Weakly-Supervised** (弱监督) 章节，找到你感兴趣的模型（例如 2019 年的经典模型 `Mem-AE` 或 2018 年的 `Sultani et al.`）。\n\n点击对应条目中的 `[Code]` 链接跳转到具体代码仓库。以 `Mem-AE` 为例：\n\n```bash\n# 示例：克隆 Mem-AE 模型代码\ngit clone https:\u002F\u002Fgithub.com\u002Fdonggong1\u002Fmemae-anomaly-detection.git\ncd memae-anomaly-detection\n```\n\n### 3. 安装模型特定依赖\n进入具体模型目录后，通常会有独立的 `requirements.txt` 或 `setup.py`。\n\n```bash\n# 创建虚拟环境 (推荐)\npython -m venv venv\nsource venv\u002Fbin\u002Factivate  # Windows 使用: venv\\Scripts\\activate\n\n# 安装依赖 (优先使用国内源)\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 4. 准备数据集\n根据 README 中的 **Datasets** 部分下载所需数据（如 UCSD Ped2, ShanghaiTech, UCF-Crime 等）。\n*   **注意**: 大部分数据集需要手动下载并解压到项目指定的目录（通常为 `data\u002F` 或 `dataset\u002F`）。\n*   **示例**: 对于 `Mem-AE`，通常需要下载 UCSD Ped2 数据集，并按照 `README` 指示转换为帧图像格式。\n\n## 基本使用\n\n以下以无监督学习经典模型 **Mem-AE** (ICCV 2019) 为例，展示最基础的训练与测试流程。其他模型的使用逻辑类似，请参考各自仓库的说明。\n\n### 1. 数据预处理\n大多数 VAD 模型需要将视频转换为图像帧序列。如果仓库未提供自动脚本，需手动处理：\n```bash\n# 示例：使用 ffmpeg 将视频拆分为帧 (假设输入视频为 test.avi)\nmkdir frames\nffmpeg -i test.avi -qscale:v 2 frames\u002Fframe_%04d.jpg\n```\n\n### 2. 训练模型\n在配置好数据集路径后，运行训练脚本。\n```bash\n# 示例命令 (具体参数请参照该模型 README)\npython main.py --dataset ucsd_ped2 --mode train --gpu 0\n```\n*训练完成后，模型权重通常会保存在 `checkpoint\u002F` 或 `models\u002F` 目录下。*\n\n### 3. 
推理与异常检测\n使用训练好的模型对测试视频进行检测，生成异常分数曲线或标记后的视频。\n```bash\n# 示例命令\npython main.py --dataset ucsd_ped2 --mode test --weights checkpoint\u002Fbest_model.pth --video_path data\u002Ftest_videos\u002F01.avi\n```\n\n### 4. 结果查看\n*   **输出文件**: 通常会生成包含异常分数的 `.txt` 文件或带有可视化框的 `.avi\u002F.mp4` 视频。\n*   **评估**: 如果拥有标注数据，可计算 AUC (Area Under Curve) 指标来评估性能。\n\n---\n**提示**: 该仓库涵盖了从 2016 年到 2022+ 的众多算法。初学者建议从 **Unsupervised -> 2019 -> Mem-AE** 或 **Weakly-Supervised -> 2018 -> Sultani et al.** 入手，这两个模型代码成熟度较高，社区支持较好，适合快速复现和理解视频异常检测的基本原理。","某智慧园区安防团队正试图升级监控系统，从传统的人脸识别转向自动检测打架、跌倒或非法闯入等视频异常行为。\n\n### 没有 awesome-video-anomaly-detection 时\n- **选型迷茫**：面对海量的学术论文，团队难以快速筛选出适合园区场景（如拥挤人群中的突发冲突）的成熟算法，往往盲目尝试过时的模型。\n- **数据适配困难**：缺乏统一的数据集指引，团队花费数周时间清洗自有监控视频，却不知该参考 UCF-Crime 还是 ShanghaiTech 等标准数据集进行预训练。\n- **复现成本高昂**：找到的开源代码往往缺少依赖说明或无法运行，工程师需耗费大量精力调试环境，导致项目迟迟无法进入验证阶段。\n- **性能评估缺失**：由于缺乏权威的性能对比数据，团队无法判断当前模型的误报率是否达标，只能凭感觉调整参数。\n\n### 使用 awesome-video-anomaly-detection 后\n- **精准技术选型**：团队直接查阅收录的最新 CVPR\u002FAAAI 论文及代码，快速锁定了针对“弱监督学习”的先进模型，完美匹配园区标注数据少的痛点。\n- **高效数据对接**：利用列表中整理的 UCF-Crime 和 XD-Violence 等数据集链接，迅速构建了高质量的测试基准，大幅缩短了数据准备周期。\n- **开箱即用验证**：通过提供的官方代码仓库链接，团队在两天内成功复现了 SOTA（最先进）模型，并立即在园区实测视频中跑通了异常检测流程。\n- **科学决策依据**：参考清单中的性能对比数据，团队量化评估出模型在特定场景下的准确率提升空间，从而制定了合理的优化路线图。\n\nawesome-video-anomaly-detection 将原本需要数月的调研与试错过程压缩至数天，让安防团队能专注于业务逻辑而非重复造轮子。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ffjchange_awesome-video-anomaly-detection_9a042626.png","fjchange","Jia-Chang Feng","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ffjchange_7cedb402.jpg","(2022.7-Now) DJI engineer, (2019-2022.6) SYSU CVer. 
Focusing in Semantic Perception and Video Understanding.","Dji Innovation, Tencent, Sun Yat-sen University","Shenzhen, China","fjchange@hotmail.com",null,"kiwi-fung.win","https:\u002F\u002Fgithub.com\u002Ffjchange",664,104,"2026-04-01T13:03:34",5,"","未说明",{"notes":93,"python":91,"dependencies":94},"该仓库是一个视频异常检测（Video Anomaly Detection）的论文和代码合集列表，并非单一的独立软件工具。它列出了多个不同年份（2016-2022）的研究项目，每个项目都有独立的代码仓库链接和特定的环境依赖（部分基于 TensorFlow，部分基于 PyTorch）。因此，没有统一的操作系统、GPU、内存或 Python 版本要求。用户需根据具体想要运行的某篇论文对应的子项目代码库，查阅其各自的 README 以获取具体的运行环境需求。",[],[14,13,52],[97,98,99,100,101,102,103,104,105],"anomalies","papers","awesome","video-anomaly-detection","novelty-detection","surveillance-videos","abnormal-events","detection","deep-learning","2026-03-27T02:49:30.150509","2026-04-06T07:16:12.309847",[109,114,119,124,129,134,139],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},9676,"在视频异常检测中，表格中标记的“端到端（end-to-end）”具体定义是什么？","在此项目中，“端到端”指的是模型在**测试阶段**只有一个步骤（one-stage），而不是指训练阶段。这与“基于编码器（Encoder-based）”和“与编码器无关（Encoder-agnostic）”不同，后者涉及是否训练特征编码器。此外，这也与预测粒度有关：某些非端到端方法（如 Sultani）在测试时需要将输出重复多次以恢复时间分辨率，而真正的端到端或细粒度方法（如 GCN-anomaly, MIST）输出的异常分数分辨率与输入视频一致，无需额外操作。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F9",{"id":115,"question_zh":116,"answer_zh":117,"source_url":118},9677,"在使用 10-crop I3D 特征提取时，如何获取每个实例（instance）的标签？特别是在推理阶段没有实例级标签时如何计算 AUC？","对于异常视频，通常没有精确到每个 instance 的标签。常用的推理策略（如 RTFM 做法）是：计算一个视频中所有 instance 的得分并求平均值，将该平均分作为整个视频的得分，然后与视频级别的标签（正常为 0，异常为 1）计算 AUC。需要注意的是，这种方法假设异常视频中异常 instance 占一定比例，否则平均后异常分数可能被稀释。关于 10-crop 的具体映射，通常是将 instance 映射到 clip，再映射到帧（例如每个 clip 包含 16 帧），最终按帧级别或视频级别评估。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F11",{"id":120,"question_zh":121,"answer_zh":122,"source_url":123},9678,"性能对比表中的 AUC 指标是 Micro-AUC 还是 Macro-AUC？两者有何区别？","该项目已更新表格以区分这两种指标。大多数论文使用的是**Micro-AUC**（更常用）。部分论文（如 SSMT）报告的是**Macro-AUC**，其计算方式不同，直接比较会导致不公平。维护者已承诺尽量统一报告为 Micro-AUC 
以确保对比的公正性。用户在参考数据时应注意区分论文原文使用的具体指标类型。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F12",{"id":125,"question_zh":126,"answer_zh":127,"source_url":128},9679,"VAD 和 MIL VAD 分别代表什么？ShanghaiTech 数据集只有正常视频用于训练，MIL VAD 的结果是如何得出的？","VAD 是 Video Anomaly Detection（视频异常检测）的缩写。MIL VAD 指的是基于多实例学习（Multiple Instance Learning）的视频异常检测方法（如 Sultani CVPR 2018）。关于 ShanghaiTech 数据集，虽然训练集主要包含正常视频，但 MIL 方法利用弱监督信号（视频级标签）进行训练。表格中报告的 MIL VAD 在 ShanghaiTech 上的性能数据（如 AUC 86.3）实际上是引用自后续论文（如 AR-Net, ICME 2020）中复现或重新组织数据集后的结果，而非原始 Sultani 论文直接在纯正常训练集上的原始结果。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F1",{"id":130,"question_zh":131,"answer_zh":132,"source_url":133},9680,"Sultani 等方法在 ShanghaiTech 数据集上使用的特征提取器是 I3D 还是 C3D？","这是一个常见的混淆点。虽然表格中可能标记为 I3D，但在复现 Sultani 方法用于 ShanghaiTech 数据集的特定研究（如 AR-Net 论文）中，作者明确指出他们使用了预训练的**C3D**特征来训练和测试，而非 I3D。因此在引用具体数值时，需确认该结果是基于 C3D 特征复现的，以免产生特征类型不匹配的误解。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F2",{"id":135,"question_zh":136,"answer_zh":137,"source_url":138},9681,"如何获取 Subway Entrance \u002F Exit 数据集？官方链接似乎只提供了 Ground Truth。","Subway 数据集的视频文件无法直接下载。根据官方说明（where-to-get-the-video.txt），获取视频的唯一方式是直接联系数据集作者 Amit Adam (amitadam@yahoo.com) 发送邮件申请。提供的链接中仅包含标注文件（GT\u002F），不包含视频源文件。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F5",{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},9682,"该项目是否有 Colab 版本以便快速运行？","目前该项目没有提供 Colab 版本。用户需要在本地环境或自己的服务器上进行配置和运行。","https:\u002F\u002Fgithub.com\u002Ffjchange\u002Fawesome-video-anomaly-detection\u002Fissues\u002F6",[]]
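关于快速上手指南「结果查看」一节提到的 AUC 评估，以及常见问题中「对视频内所有 instance 的分数取平均、再与视频级标签计算 AUC」的推理策略（RTFM 等弱监督方法的常见做法），下面给出一个极简的 Python 示意。其中 `frame_auc`、`video_level_auc` 均为本文假设的示例函数名，并非任一模型仓库的官方脚本；`frame_auc` 基于 Mann-Whitney 统计量，结果与 `sklearn.metrics.roc_auc_score` 一致：

```python
# 极简示意（假设性示例）：帧级 AUC 与视频级 AUC 的计算

def frame_auc(labels, scores):
    """labels: 0 = 正常帧, 1 = 异常帧; scores: 模型输出的逐帧异常分数。"""
    pos = [s for s, y in zip(scores, labels) if y == 1]  # 异常帧的分数
    neg = [s for s, y in zip(scores, labels) if y == 0]  # 正常帧的分数
    # AUC = 随机取一异常帧与一正常帧时，异常帧分数更高的概率（并列记 0.5）
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def video_level_auc(video_labels, video_instance_scores):
    """弱监督推理的常见策略：把每个视频所有 instance 分数的平均值
    作为该视频的整体得分，再与视频级标签 (0/1) 计算 AUC。"""
    video_scores = [sum(s) / len(s) for s in video_instance_scores]
    return frame_auc(video_labels, video_scores)


if __name__ == "__main__":
    # 玩具数据：4 帧，前两帧正常、后两帧异常
    print(frame_auc([0, 0, 1, 1], [0.1, 0.7, 0.5, 0.9]))  # 0.75
```

注意：如常见问题中所述，视频级求平均隐含「异常 instance 占一定比例」的假设，异常片段过短时分数可能被稀释；帧级评估时还需区分论文使用的是 Micro-AUC 还是 Macro-AUC。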