[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-sbharadwajj--awesome-zero-shot-learning":3,"tool-sbharadwajj--awesome-zero-shot-learning":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":78,"owner_twitter":78,"owner_website":81,"owner_url":82,"languages":78,"stars":83,"forks":84,"last_commit_at":85,"license":78,"difficulty_score":86,"env_os":87,"env_gpu":88,"env_ram":88,"env_deps":89,"category_tags":92,"github_topics":93,"view_count":23,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":98,"updated_at":99,"faqs":100,"releases":101},3471,"sbharadwajj\u002Fawesome-zero-shot-learning","awesome-zero-shot-learning","A curated list of papers, code and resources pertaining to zero shot learning","awesome-zero-shot-learning 是一个精心整理的零样本学习（Zero-Shot Learning, ZSL）资源合集，旨在为相关领域的探索者提供一站式导航。它主要解决了研究人员和开发者在入门或深入研究时，难以高效获取高质量论文、标准数据集对比结果及可复现代码的痛点。通过系统化地收录从经典方法到 CVPR、ICLR、ECCV 等顶级会议的最新成果（如 TF-vaegan、VSC、SGMA 等模型），该资源库极大地降低了文献调研与技术复现的门槛。\n\n这份清单特别适合人工智能领域的研究人员、算法工程师以及高校学生使用。无论是希望快速了解零样本分类前沿动态的学者，还是寻找基准代码以开展实验的开发者，都能从中获益。其独特亮点在于不仅提供了论文的原文链接，还广泛附带了官方实现代码仓库，并涵盖了归纳式与直推式等多种学习设定下的关键资源。此外，项目保持活跃更新，欢迎社区共同贡献，确保了内容的时效性与实用性，是进入零样本学习领域不可或缺的参考指南。","# Zero-Shot 
Learning\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\nA curated list of resources including papers, comparative results on standard datasets and relevant links pertaining to zero-shot learning.\n\n## Contributing\n\nContributions are welcome. Please see the [issue](https:\u002F\u002Fgithub.com\u002Fchichilicious\u002Fawesome-zero-shot-learning\u002Fissues\u002F2) which lists the things planned for inclusion in this repo. If you wish to contribute within these boundaries, feel free to send a PR. If you have suggestions for new sections to be included, please raise an issue and discuss before sending a PR.\n\n## Table of Contents\n+ [Papers](#Papers)\n+ [Datasets](#Datasets)\n+ [Starter code for ZSL](#Starter-Code)\n+ [Other Resources](#Other-resources)\n\n\n## Zero-Shot Object Classification\n\n### Papers\n\n#### CVPR 2021\n+ Liu Bo, Qiulei Dong and Zhanyi Hu. \"Hardness Sampling for Self-Training Based Transductive Zero-Shot Learning\" [[pdf]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FBo_Hardness_Sampling_for_Self-Training_Based_Transductive_Zero-Shot_Learning_CVPR_2021_paper.pdf)\n+ Zongyan Han, Zhenyong Fu, Shuo Chen and Jian Yang. \"Contrastive Embedding for Generalized Zero-Shot Learning\" [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.16173.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FHanzy1996\u002FCE-GZSL)\n+ Yang Liu, Lei Zhou, Xiao Bai, Yifei Huang, Lin Gu, Jun Zhou, Tatsuya Harada. 
\"Goal-Oriented Gaze Estimation for Zero-Shot Learning\" [[pdf]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_Goal-Oriented_Gaze_Estimation_for_Zero-Shot_Learning_CVPR_2021_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fosierboy\u002FGEM-ZSL)\n+ \"Learning Graph Embeddings for Compositional Zero-shot Learning\" [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.01987.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FExplainableML\u002Fczsl)\n\n#### ICLR 2021\n+ Lu Liu, Tianyi Zhou, Guodong Long, Jing Jiang, Xuanyi Dong, Chengqi Zhang. \"Isometric Propagation Network for Generalized Zero-Shot Learning.\" [[pdf]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=-mWcQVLPSPy)\n\n#### ECCV 2020\n+ **TF-vaegan**: Sanath Narayan\u003Csup>\*\u003C\u002Fsup>, Akshita Gupta\u003Csup>\*\u003C\u002Fsup>, Fahad Shahbaz Khan, Cees G. M. Snoek, Ling Shao. \"Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification.\" ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.07833.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fakshitac8\u002Ftfvaegan)\n+ **LsrGAN**: Maunil R Vyas, Hemanth Venkateswara, and Sethuraman Panchanathan. \"Leveraging Seen and Unseen Semantic Relationships for Generative Zero-Shot Learning.\" ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.09549.pdf)\n+ Xingyu Chen, Xuguang Lan, Fuchun Sun, and Nanning Zheng. \"A Boundary Based Out-of-Distribution Classifier for Generalized Zero-Shot Learning.\" ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.04872.pdf)\n+ \"Region Graph Embedding Network for Zero-Shot Learning.\" ECCV Spotlight (2020). [[pdf]]()\n\n#### ICLR 2020\n+ Tristan Sylvain, Linda Petrini, Devon Hjelm. \"Locality and Compositionality in Zero-Shot Learning.\" ICLR (2020). 
[[pdf]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hye_V0NKwr)\n\n#### NeurIPS 2019\n+ **VSC:** Ziyu Wan, Dongdong Chen, Yan Li, Xingguang Yan, Junge Zhang, Yizhou Yu, Jing Liao. \"Transductive Zero-Shot Learning with Visual Structure Constraint.\" NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9188-transductive-zero-shot-learning-with-visual-structure-constraint.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fraywzy\u002FVSC)\n+ Hyeonwoo Yu, Beomhee Lee. \"Zero-shot Learning via Simultaneous Generating and Learning.\" NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8300-zero-shot-learning-via-simultaneous-generating-and-learning.pdf)\n+ Jian Ni, Shanghang Zhang, Haiyong Xie. \"Dual Adversarial Semantics-Consistent Network for Generalized Zero-Shot Learning.\" NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8846-dual-adversarial-semantics-consistent-network-for-generalized-zero-shot-learning.pdf)\n+ **SGMA:** Yizhe Zhu, Jianwen Xie, Zhiqiang Tang, Xi Peng, Ahmed Elgammal. \"Semantic-Guided Multi-Attention Localization for Zero-Shot Learning.\" NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9632-semantic-guided-multi-attention-localization-for-zero-shot-learning.pdf)\n\n#### ICCV 2019\n+ **CIZSL:** Mohamed Elhoseiny, Mohamed Elfeki. \"Creativity Inspired Zero-Shot Learning.\" ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FElhoseiny_Creativity_Inspired_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fmhelhoseiny\u002FCIZSL)\n+ **LFGAA+SA:** Yang Liu, Jishun Guo, Deng Cai, Xiaofei He. \"Attribute Attention for Semantic Disambiguation in Zero-Shot Learning.\" ICCV (2019). 
[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLiu_Attribute_Attention_for_Semantic_Disambiguation_in_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FZJULearning\u002FAttentionZSL)\n+ **TCN:** Huajie Jiang, Ruiping Wang, Shiguang Shan, Xilin Chen. \"Transferable Contrastive Network for Generalized Zero-Shot Learning.\" ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FJiang_Transferable_Contrastive_Network_for_Generalized_Zero-Shot_Learning_ICCV_2019_paper.pdf)\n+ **GXE:** Kai Li, Martin Renqiang Min, Yun Fu. \"Rethinking Zero-Shot Learning: A Conditional Visual Classification Perspective.\" ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLi_Rethinking_Zero-Shot_Learning_A_Conditional_Visual_Classification_Perspective_ICCV_2019_paper.pdf)\n+ Yizhe Zhu, Jianwen Xie, Bingchen Liu, Ahmed Elgammal. \"Learning Feature-to-Feature Translator by Alternating Back-Propagation for Generative Zero-Shot Learning.\" ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhu_Learning_Feature-to-Feature_Translator_by_Alternating_Back-Propagation_for_Generative_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FEthanZhu90\u002FZSL_ABP)\n+ Yannick Le Cacheux, Herve Le Borgne, Michel Crucianu. \"Modeling Inter and Intra-Class Relations in the Triplet Loss for Zero-Shot Learning.\" ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLe_Cacheux_Modeling_Inter_and_Intra-Class_Relations_in_the_Triplet_Loss_for_ICCV_2019_paper.pdf)\n\n#### CVPR 2019\n+ **CADA-VAE:** Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata. \"Generalized Zero- and Few-Shot Learning via Aligned Variational Autoencoders.\" CVPR (2019). 
[[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.01784) [[code]](https:\u002F\u002Fgithub.com\u002Fedgarschnfld\u002FCADA-VAE-PyTorch)\n+ **GDAN:** He Huang, Changhu Wang, Philip S. Yu, Chang-Dong Wang. \"Generative Dual Adversarial Network for Generalized Zero-shot Learning.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.04857) [[code]](https:\u002F\u002Fgithub.com\u002Fstevehuanghe\u002FGDAN)\n+ **DeML:** Binghui Chen, Weihong Deng. \"Hybrid-Attention based Decoupled Metric Learning for Zero-Shot Image Retrieval.\" CVPR (2019). [[pdf]](http:\u002F\u002Fwww.bhchen.cn\u002Fpaper\u002Fcvpr19.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fchenbinghui1\u002FHybrid-Attention-based-Decoupled-Metric-Learning)\n+ **Gzsl-VSE:** Pengkai Zhu, Hanxiao Wang, Venkatesh Saligrama. \"Generalized Zero-Shot Recognition based on Visually Semantic Embedding.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.07993.pdf)\n+ **LisGAN:** Jingjing Li, Mengmeng Jin, Ke Lu, Zhengming Ding, Lei Zhu, Zi Huang. \"Leveraging the Invariant Side of Generative Zero-Shot Learning.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.04092.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Flijin118\u002FLisGAN)\n+ **DGP:** Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, Eric P. Xing. \"Rethinking Knowledge Graph Propagation for Zero-Shot Learning.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.11724.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fcyvius96\u002FDGP)\n+ **DAZL:** Yuval Atzmon, Gal Chechik. \"Domain-Aware Generalized Zero-Shot Learning.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.09903.pdf)\n+ **PrEN:** Meng Ye, Yuhong Guo. \"Progressive Ensemble Networks for Zero-Shot Recognition.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07473.pdf)\n+ Tristan Hascoet, Yasuo Ariki, Tetsuya Takiguchi. 
\"On Zero-Shot Learning of generic objects.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.04957.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FTristHas\u002FGOZ)\n+ **SABR-T:** Akanksha Paul, Narayanan C. Krishnan, Prateek Munjal. \"Semantically Aligned Bias Reducing Zero Shot Learning.\" CVPR (2019). [[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.07659)\n+ **AREN:** Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, Ling Shao. \"Attentive Region Embedding Network for Zero-shot Learning.\" CVPR (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FXie_Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_2019_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fgsx0\u002FAttentive-Region-Embedding-Network-for-Zero-shot-Learning)\n+ Zhengming Ding, Hongfu Liu. \"Marginalized Latent Semantic Encoder for Zero-Shot Learning.\" CVPR (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FDing_Marginalized_Latent_Semantic_Encoder_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n+ **PQZSL:** Jin Li, Xuguang Lan, Yang Liu, Le Wang, Nanning Zheng. \"Compressing Unknown Images with Product Quantizer for Efficient Zero-Shot Classification.\" CVPR (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLi_Compressing_Unknown_Images_With_Product_Quantizer_for_Efficient_Zero-Shot_Classification_CVPR_2019_paper.pdf)\n+ Mert Bulent Sariyildiz, Ramazan Gokberk Cinbis. \"Gradient Matching Generative Networks for Zero-Shot Learning.\" CVPR (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FSariyildiz_Gradient_Matching_Generative_Networks_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n+ Bin Tong, Chao Wang, Martin Klinkigt, Yoshiyuki Kobayashi, Yuuichi Nonaka. \"Hierarchical Disentanglement of Discriminative Latent Features for Zero-shot Learning.\" CVPR (2019). 
[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FTong_Hierarchical_Disentanglement_of_Discriminative_Latent_Features_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n\n#### NeurIPS 2018\n+ **DCN:** Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I. Jordan. \"Generalized Zero-Shot Learning with Deep Calibration Network\" NeurIPS (2018). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7471-generalized-zero-shot-learning-with-deep-calibration-network.pdf)\n+ **S2GA:** Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, Zhongfei (Mark) Zhang. \"Stacked Semantics-Guided Attention Model for Fine-Grained Zero-Shot Learning.\" NeurIPS (2018). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7839-stacked-semantics-guided-attention-model-for-fine-grained-zero-shot-learning.pdf)\n+ **DIPL:** An Zhao, Mingyu Ding, Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen. \"Domain-Invariant Projection Learning for Zero-Shot Recognition.\" NeurIPS (2018). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7380-domain-invariant-projection-learning-for-zero-shot-recognition.pdf)\n\n#### ECCV 2018\n+ **SZSL:** Jie Song, Chengchao Shen, Jie Lei, An-Xiang Zeng, Kairi Ou, Dacheng Tao, Mingli Song. \"Selective Zero-Shot Classification with Augmented Attributes.\" ECCV (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FJie_Song_Selective_Zero-Shot_Classification_ECCV_2018_paper.pdf)]\n+ **LCP-SA:** Huajie Jiang, Ruiping Wang, Shiguang Shan, Xilin Chen. \"Learning Class Prototypes via Structure Alignment for Zero-Shot Recognition.\" ECCV (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FHuajie_Jiang_Learning_Class_Prototypes_ECCV_2018_paper.pdf)]\n+ **MC-ZSL:** Rafael Felix, Vijay Kumar B. G., Ian Reid, Gustavo Carneiro. \"Multi-modal Cycle-consistent Generalized Zero-Shot Learning.\" ECCV (2018). 
[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRAFAEL_FELIX_Multi-modal_Cycle-consistent_Generalized_ECCV_2018_paper.pdf)] [[code]](https:\u002F\u002Fgithub.com\u002Frfelixmg\u002Ffrwgan-eccv18)\n\n\n#### CVPR 2018\n\n+ **GCN:**  Xiaolong Wang, Yufei Ye, Abhinav Gupta. \"Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs.\" CVPR (2018). [[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.08035.pdf)] [[code](https:\u002F\u002Fgithub.com\u002FJudyYe\u002Fzero-shot-gcn)]\n+ **PSR:** Yashas Annadani, Soma Biswas. \"Preserving Semantic Relations for Zero-Shot Learning.\" CVPR (2018). [[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.03049.pdf)]\n+ **GAN-NT:** Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, Ahmed Elgammal. \"A Generative Adversarial Approach for Zero-Shot Learning From Noisy Texts.\" CVPR (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhu_A_Generative_Adversarial_CVPR_2018_paper.pdf)]\n+ **TUE:** Jie Song, Chengchao Shen, Yezhou Yang, Yang Liu, Mingli Song. \"Transductive Unbiased Embedding for Zero-Shot Learning.\" CVPR (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSong_Transductive_Unbiased_Embedding_CVPR_2018_paper.pdf)]\n+ **SP-AEN:** Long Chen, Hanwang Zhang, Jun Xiao, Wei Liu, Shih-Fu Chang. \"Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks.\" CVPR (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Zero-Shot_Visual_Recognition_CVPR_2018_paper.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Fzjuchenlong\u002Fsp-aen.cvpr18)]\n+ **ML-SKG:** Chung-Wei Lee, Wei Fang, Chih-Kuan Yeh, Yu-Chiang Frank Wang. \"Multi-Label Zero-Shot Learning With Structured Knowledge Graphs.\" CVPR (2018). 
[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_Multi-Label_Zero-Shot_Learning_CVPR_2018_paper.pdf)] [[project](https:\u002F\u002Fpeople.csail.mit.edu\u002Fweifang\u002Fproject\u002Fvll18-mlzsl\u002F)]\n+ **GZSL-SE:** Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, Piyush Rai. \"Generalized Zero-Shot Learning via Synthesized Examples.\" CVPR (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FVerma_Generalized_Zero-Shot_Learning_CVPR_2018_paper.pdf)]\n+ **FGN:** Yongqin Xian, Tobias Lorenz, Bernt Schiele, Zeynep Akata. \"Feature Generating Networks for Zero-Shot Learning.\" CVPR (2018). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXian_Feature_Generating_Networks_CVPR_2018_paper.pdf)] [[code](http:\u002F\u002Fdatasets.d2.mpi-inf.mpg.de\u002Fxian\u002Fcvpr18xian.zip)] [[project](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision-and-multimodal-computing\u002Fresearch\u002Fzero-shot-learning\u002Ffeature-generating-networks-for-zero-shot-learning\u002F)]\n+ **LDF:** Yan Li, Junge Zhang, Jianguo Zhang, Kaiqi Huang. \"Discriminative Learning of Latent Features for Zero-Shot Recognition.\" CVPR (2018). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Discriminative_Learning_of_CVPR_2018_paper.pdf) \n+ **WSL:** Li Niu, Ashok Veeraraghavan, and Ashu Sabharwal. \"Webly Supervised Learning Meets Zero-shot Learning: A Hybrid Approach for Fine-grained Classification.\" CVPR (2018). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNiu_Webly_Supervised_Learning_CVPR_2018_paper.pdf)\n\n#### TPAMI 2018\n\n+ **C-GUB:** Yongqin Xian, Christoph H. Lampert, Bernt Schiele, Zeynep Akata. \"Zero-shot learning-A comprehensive evaluation of the good, the bad and the ugly.\" TPAMI (2018). 
[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.00600.pdf)] [[project](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision-and-multimodal-computing\u002Fresearch\u002Fzero-shot-learning\u002Fzero-shot-learning-the-good-the-bad-and-the-ugly\u002F)]\n\n#### AAAI 2018, 2017\n+ **GANZrl:** Bin Tong, Martin Klinkigt, Junwen Chen, Xiankun Cui, Quan Kong, Tomokazu Murakami, Yoshiyuki Kobayashi. \"Adversarial Zero-shot Learning With Semantic Augmentation.\" AAAI (2018). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16805\u002F15965)\n+ **JDZsL:** Soheil Kolouri, Mohammad Rostami, Yuri Owechko, Kyungnam Kim. \"Joint Dictionaries for Zero-Shot Learning.\" AAAI (2018). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16404\u002F16723)\n+ **VZSL:** Wenlin Wang, Yunchen Pu, Vinay Kumar Verma, Kai Fan, Yizhe Zhang, Changyou Chen, Piyush Rai, Lawrence Carin. \"Zero-Shot Learning via Class-Conditioned Deep Generative Models.\" AAAI (2018). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16087\u002F16709)\n+ **AS:** Yuchen Guo, Guiguang Ding, Jungong Han, Sheng Tang. \"Zero-Shot Learning With Attribute Selection.\" AAAI (2018). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16350\u002F16272)\n+ **DSSC:** Yan Li, Zhen Jia, Junge Zhang, Kaiqi Huang, Tieniu Tan. \"Deep Semantic Structural Constraints for Zero-Shot Learning.\" AAAI (2018). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16309\u002F16294)\n+ **ZsRDA:** Yang Long, Li Liu, Yuming Shen, Ling Shao. \"Towards Affordable Semantic Searching: Zero-Shot Retrieval via Dominant Attributes.\" AAAI (2018). 
[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16626\u002F16314)\n+ **DCL:** Yuchen Guo, Guiguang Ding, Jungong Han, Yue Gao. \"Zero-Shot Recognition via Direct Classifier Learning with Transferred Samples and Pseudo Labels.\" AAAI (2017). [[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002Fview\u002F14160\u002F14281)\n\n\n#### ICCV 2017\n+ **A2C:** Berkan Demirel, Ramazan Gokberk Cinbis, Nazli Ikizler-Cinbis. \"Attributes2Classname: A Discriminative Model for Attribute-Based Unsupervised Zero-Shot Learning.\" ICCV (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FDemirel_Attributes2Classname_A_Discriminative_ICCV_2017_paper.pdf)] [[code]](https:\u002F\u002Fgithub.com\u002Fberkandemirel\u002Fattributes2classname)\n+ **PVE:** Soravit Changpinyo, Wei-Lun Chao, Fei Sha. \"Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning.\" ICCV (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FChangpinyo_Predicting_Visual_Exemplars_ICCV_2017_paper.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Fpujols\u002FZero-shot-learning-journal)]\n+ **LDL:** Huajie Jiang, Ruiping Wang, Shiguang Shan, Yi Yang, Xilin Chen. \"Learning Discriminative Latent Attributes for Zero-Shot Classification.\" ICCV (2017). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FJiang_Learning_Discriminative_Latent_ICCV_2017_paper.pdf)\n\n#### CVPR 2017\n\n+ **Deep-SCoRe:** Pedro Morgado, Nuno Vasconcelos. \"Semantically Consistent Regularization for Zero-Shot Recognition.\" CVPR (2017). [[pdf](http:\u002F\u002Fwww.svcl.ucsd.edu\u002F~morgado\u002Fscore\u002Fscore-cvpr17.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Fpedro-morgado\u002Fscore-zeroshot)]\n+ **DEM:** Li Zhang, Tao Xiang, Shaogang Gong. \"Learning a Deep Embedding Model for Zero-Shot Learning.\" CVPR (2017). 
[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.05088.pdf)] [[code](https:\u002F\u002Fgithub.com\u002Flzrobots\u002FDeepEmbeddingModel_ZSL)]\n+ **VDS:** Yang Long, Li Liu, Ling Shao, Fumin Shen, Guiguang Ding, Jungong Han. \"From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis.\" CVPR (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLong_From_Zero-Shot_Learning_CVPR_2017_paper.html)]\n+ **ESD:** Zhengming Ding, Ming Shao, Yun Fu. \"Low-Rank Embedded Ensemble Semantic Dictionary for Zero-Shot Learning.\" CVPR (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDing_Low-Rank_Embedded_Ensemble_CVPR_2017_paper.pdf)]\n+ **SAE:** Elyor Kodirov, Tao Xiang, Shaogang Gong. \"Semantic Autoencoder for Zero-Shot Learning.\" CVPR (2017). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FKodirov_Semantic_Autoencoder_for_CVPR_2017_paper.pdf) [[code](https:\u002F\u002Fgithub.com\u002FElyorcv\u002FSAE)]\n+ **DVSM:** Yanan Li, Donghui Wang, Huanhang Hu, Yuetan Lin, Yueting Zhuang. \"Zero-Shot Recognition Using Dual Visual-Semantic Mapping Paths\". CVPR (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLi_Zero-Shot_Recognition_Using_CVPR_2017_paper.pdf)]\n+ **MTF-MR:** Xing Xu, Fumin Shen, Yang Yang, Dongxiang Zhang, Heng Tao Shen, Jingkuan Song. \"Matrix Tri-Factorization With Manifold Regularizations for Zero-Shot Learning.\" CVPR (2017). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FXu_Matrix_Tri-Factorization_With_CVPR_2017_paper.pdf)]\n+ Nour Karessli, Zeynep Akata, Bernt Schiele, Andreas Bulling. \"Gaze Embeddings for Zero-Shot Image Classification.\" CVPR (2017). 
[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.09309.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FNoura-kr\u002FCVPR17)\n+ **GUB:** Yongqin Xian, Bernt Schiele, Zeynep Akata. \"Zero-Shot learning - The Good, the Bad and the Ugly.\" CVPR (2017). \n[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FXian_Zero-Shot_Learning_-_CVPR_2017_paper.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fpedro-morgado\u002Fscore-zeroshot)\n\n\n#### CVPR 2016\n\n+ **MC-ZSL:** Zeynep Akata, Mateusz Malinowski, Mario Fritz, Bernt Schiele. \"Multi-Cue Zero-Shot Learning With Strong Supervision.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FAkata_Multi-Cue_Zero-Shot_Learning_CVPR_2016_paper.pdf)] [[code]](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Findex.php?id=2935)\n+ **LATEM:** Yongqin Xian, Zeynep Akata, Gaurav Sharma, Quynh Nguyen, Matthias Hein, Bernt Schiele. \"Latent Embeddings for Zero-Shot Classification.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FXian_Latent_Embeddings_for_CVPR_2016_paper.pdf)][[code](http:\u002F\u002Fdatasets.d2.mpi-inf.mpg.de\u002Fyxian16cvpr\u002FlatEm.zip)]\n+ **LIM:** Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, Anton van den Hengel. \"Less Is More: Zero-Shot Learning From Online Textual Documents With Noise Suppression.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FQiao_Less_Is_More_CVPR_2016_paper.pdf)]\n+ **SYNC:** Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, Fei Sha. \"Synthesized Classifiers for Zero-Shot Learning.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FChangpinyo_Synthesized_Classifiers_for_CVPR_2016_paper.pdf)][[code](https:\u002F\u002Fgithub.com\u002Fpujols\u002Fzero-shot-learning)]\n+ **RML:** Ziad Al-Halah, Makarand Tapaswi, Rainer Stiefelhagen. 
\"Recovering the Missing Link: Predicting Class-Attribute Associations for Unsupervised Zero-Shot Learning.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FAl-Halah_Recovering_the_Missing_CVPR_2016_paper.pdf)]\n+ **SLE:** Ziming Zhang, Venkatesh Saligrama. \"Zero-Shot Learning via Joint Latent Similarity Embedding.\" CVPR (2016). [[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FZhang_Zero-Shot_Learning_via_CVPR_2016_paper.pdf)] [[code](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RimUgUlf2tfpntzlxdlYaAvm34HX0fUb\u002Fview?usp=sharing)]\n\n\n#### ECCV 2016\n+ Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, Fei Sha. \"An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild.\" ECCV (2016). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1605.04253.pdf)\n+ **MTE:** Xun Xu, Timothy M. Hospedales, Shaogang Gong. \"Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation.\" ECCV (2016). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.08663.pdf)\n+ Ziming Zhang, Venkatesh Saligrama. \"Zero-Shot Recognition via Structured Prediction.\" ECCV (2016). [[pdf]](https:\u002F\u002Fpdfs.semanticscholar.org\u002Fbe96\u002F1637db8561b027fd48788c64e7919c7cd760.pdf)\n+ Maxime Bucher, Stephane Herbin, Frederic Jurie. \"Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification.\" ECCV (2016). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.08085.pdf)\n\n#### AAAI 2016\n+ **RKT:** Donghui Wang, Yanan Li, Yuetan Lin, Yueting Zhuang. \"Relational Knowledge Transfer for Zero-Shot Learning.\" AAAI (2016). [[pdf]](https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI16\u002Fpaper\u002Fview\u002F11802\u002F11854)\n\n#### TPAMI 2016, 2015, 2013\n+ **ALE:** Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid. 
"Label-Embedding for Image Classification." TPAMI (2016). [[pdf]](https://arxiv.org/pdf/1503.08677.pdf)
+ **TMV:** Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Shaogang Gong. "Transductive Multi-view Zero-Shot Learning." TPAMI (2015). [[pdf]](https://arxiv.org/pdf/1501.04560.pdf) [[code]](https://github.com/yanweifu/embedding_zero-shot-learning)
+ **DAP:** Christoph H. Lampert, Hannes Nickisch and Stefan Harmeling. "Attribute-Based Classification for Zero-Shot Visual Object Categorization." TPAMI (2013). [[pdf]](http://pub.ist.ac.at/~chl/papers/lampert-pami2013.pdf)

#### CVPR 2015
+ **SJE:** Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, Bernt Schiele. "Evaluation of Output Embeddings for Fine-Grained Image Classification." CVPR (2015). [[pdf]](https://www.mpi-inf.mpg.de/fileadmin/inf/d2/akata/1690.pdf) [[code]](https://www.mpi-inf.mpg.de/index.php?id=2325)
+ Zhenyong Fu, Tao Xiang, Elyor Kodirov, Shaogang Gong. "Zero-Shot Object Recognition by Semantic Manifold Distance." CVPR (2015). [[pdf]](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Fu_Zero-Shot_Object_Recognition_2015_CVPR_paper.pdf)

#### ICCV 2015
+ **SSE:** Ziming Zhang, Venkatesh Saligrama. "Zero-Shot Learning via Semantic Similarity Embedding." ICCV (2015). [[pdf]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Zhang_Zero-Shot_Learning_via_ICCV_2015_paper.pdf) [[code]](https://zimingzhang.wordpress.com/source-code/)
+ **LRL:** Xin Li, Yuhong Guo, Dale Schuurmans. "Semi-Supervised Zero-Shot Classification with Label Representation Learning." ICCV (2015). 
[[pdf]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Li_Semi-Supervised_Zero-Shot_Classification_ICCV_2015_paper.pdf)
+ **UDA:** Elyor Kodirov, Tao Xiang, Zhenyong Fu, Shaogang Gong. "Unsupervised Domain Adaptation for Zero-Shot Learning." ICCV (2015). [[pdf]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Kodirov_Unsupervised_Domain_Adaptation_ICCV_2015_paper.pdf)
+ Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, Ruslan Salakhutdinov. "Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions." ICCV (2015). [[pdf]](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Ba_Predicting_Deep_Zero-Shot_ICCV_2015_paper.pdf)


#### NIPS 2014, 2013, 2009
+ Dinesh Jayaraman, Kristen Grauman. "Zero-Shot Recognition with Unreliable Attributes." NIPS (2014). [[pdf]](https://papers.nips.cc/paper/5290-zero-shot-recognition-with-unreliable-attributes.pdf)
+ **CMT:** Richard Socher, Milind Ganjoo, Christopher D. Manning, Andrew Y. Ng. "Zero-Shot Learning Through Cross-Modal Transfer." NIPS (2013). [[pdf]](https://papers.nips.cc/paper/5027-zero-shot-learning-through-cross-modal-transfer.pdf) [[code]](https://github.com/mganjoo/zslearning)
+ **DeViSE:** Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, Tomas Mikolov. "DeViSE: A Deep Visual-Semantic Embedding Model." NIPS (2013). [[pdf]](https://papers.nips.cc/paper/5204-devise-a-deep-visual-semantic-embedding-model.pdf)
+ Mark Palatucci, Dean Pomerleau, Geoffrey Hinton, Tom M. Mitchell. "Zero-Shot Learning with Semantic Output Codes." NIPS (2009). [[pdf]](https://papers.nips.cc/paper/3650-zero-shot-learning-with-semantic-output-codes.pdf)

#### ECCV 2014
+ **TMV-BLP:** Yanwei Fu, Timothy M. 
Hospedales, Tao Xiang, Zhenyong Fu, Shaogang Gong. "Transductive Multi-view Embedding for Zero-Shot Recognition and Annotation." ECCV (2014). [[pdf]](https://www.eecs.qmul.ac.uk/~txiang/publications/Fu_et_al_embedding_eccv2014.pdf) [[code]](https://github.com/yanweifu/embedding_zero-shot-learning)
+ Stanislaw Antol, Larry Zitnick, Devi Parikh. "Zero-Shot Learning via Visual Abstraction." ECCV (2014). [[pdf]](https://computing.ece.vt.edu/~santol/projects/zsl_via_visual_abstraction/eccv2014_zsl_via_visual_abstraction.pdf) [[code]](https://github.com/StanislawAntol/zsl_via_visual_abstraction) [[project]](https://computing.ece.vt.edu/~santol/projects/zsl_via_visual_abstraction/)

#### CVPR 2013
+ **ALE:** Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. "Label-Embedding for Attribute-Based Classification." CVPR (2013). [[pdf]](https://www.cv-foundation.org/openaccess/content_cvpr_2013/papers/Akata_Label-Embedding_for_Attribute-Based_2013_CVPR_paper.pdf)

#### Other Papers
+ **EsZSL:** Bernardino Romera-Paredes, Philip H. S. Torr. "An embarrassingly simple approach to zero-shot learning." ICML (2015). [[pdf]](http://proceedings.mlr.press/v37/romera-paredes15.pdf) [[code]](https://github.com/MLWave/extremely-simple-one-shot-learning)
+ **AEZSL:** "Zero-Shot Learning via Category-Specific Visual-Semantic Mapping and Label Refinement." IEEE TIP (2018). [[pdf]](https://ieeexplore.ieee.org/document/8476580)
+ **ZSGD:** Tiancheng Zhao, Maxine Eskenazi. "Zero-Shot Dialog Generation with Cross-Domain Latent Actions." SIGDIAL (2018). 
[[pdf]](https://arxiv.org/abs/1805.04803v1) [[code]](https://github.com/snakeztc/NeuralDialog-ZSDG)
+ Yanwei Fu, Tao Xiang, Yu-Gang Jiang, Xiangyang Xue, Leonid Sigal, Shaogang Gong. "Recent Advances in Zero-shot Recognition." IEEE Signal Processing Magazine. [[pdf]](https://arxiv.org/pdf/1710.04837.pdf)
+ Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, Eric P. Xing. "Rethinking Knowledge Graph Propagation for Zero-Shot Learning." arXiv (2018). [[pdf]](https://arxiv.org/pdf/1805.11724v2.pdf) [[code]](https://github.com/cyvius96/adgpm)
+ **Survey:** Wei Wang, Vincent W. Zheng, Han Yu, Chunyan Miao. "A Survey of Zero-Shot Learning: Settings, Methods, and Applications." TIST (2019). [[pdf]](https://dl.acm.org/citation.cfm?doid=3306498.3293318)


### Datasets
+ **LAD:** Large-scale Attribute Dataset. Categories: 230. [[link]](https://github.com/PatrickZH/A-Large-scale-Attribute-Dataset-for-Zero-shot-Learning)
+ **CUB:** Caltech-UCSD Birds. Categories: 200. [[link]](http://www.vision.caltech.edu/visipedia/CUB-200-2011.html)
+ **AWA2:** Animals with Attributes 2. Categories: 50. [[link]](https://cvml.ist.ac.at/AwA2/)
+ **aPY:** attributes Pascal and Yahoo. Categories: 32. [[link]](http://vision.cs.uiuc.edu/attributes/)
+ **Flowers Dataset:** Two datasets, with 17 and 102 categories respectively. [[link]](http://www.robots.ox.ac.uk/~vgg/data/flowers/)
+ **SUN:** Scene Attributes. Categories: 717. [[link]](http://cs.brown.edu/~gmpatter/sunattributes.html)
+ **HMDB51:** a large human motion database. 
Categories:51 [[link]](https:\u002F\u002Fserre-lab.clps.brown.edu\u002Fresource\u002Fhmdb-a-large-human-motion-database\u002F#Downloads)\n+ **UCF101** :  an action image recognition dataset of real action videos, collected from YouTube. Categories:101 [[link]](https:\u002F\u002Fwww.crcv.ucf.edu\u002Fdata\u002FUCF101.php)\n\n### Starter Code\nThis repository contains a `Demo` folder which has a Jupyter Notebook step-by-step code to \"An embarrassingly simple approach to zero-shot learning.\" ICML (2015).\nThis can be used as an introductory code to obtain the basic understanding of Zero-shot Learning.\n\n\n## Other resources\n+ https:\u002F\u002Fmedium.com\u002F@alitech_2017\u002Ffrom-zero-to-hero-shaking-up-the-field-of-zero-shot-learning-c43208f71332\n+ https:\u002F\u002Fwww.analyticsindiamag.com\u002Fwhat-is-zero-shot-learning\u002F\n+ https:\u002F\u002Fmedium.com\u002F@cetinsamet\u002Fzero-shot-learning-53080995d45f\n+ https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fzero-shot-text-classification\u002F\n\n## License\n\n[![CC0](http:\u002F\u002Fmirrors.creativecommons.org\u002Fpresskit\u002Fbuttons\u002F88x31\u002Fsvg\u002Fcc-zero.svg)](https:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F)\n","# 零样本学习\n\n[![Awesome](https:\u002F\u002Fcdn.rawgit.com\u002Fsindresorhus\u002Fawesome\u002Fd7305f38d29fed78fa85652e3a63e154dd8e8829\u002Fmedia\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n\n一份精心整理的资源列表，包含论文、标准数据集上的对比结果以及与零样本学习相关的各类链接。\n\n## 贡献说明\n\n欢迎各位贡献！请参阅[issue](https:\u002F\u002Fgithub.com\u002Fchichilicious\u002Fawesome-zero-shot-learning\u002Fissues\u002F2)，其中列出了计划收录到本仓库的内容。如果您希望在这些范围内进行贡献，请随时提交PR。若您有新增板块的建议，请先提出issue并讨论后再提交PR。\n\n## 目录\n+ [论文](#Papers)\n+ [数据集](#Datasets)\n+ [零样本学习入门代码](#Starter-Code)\n+ [其他资源](#Other-resources)\n\n\n## 零样本目标分类\n\n### 论文\n\n#### CVPR 2021\n+ 刘博、董秋蕾和胡展义。“基于自训练的转导式零样本学习中的难度采样” 
[[pdf]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FBo_Hardness_Sampling_for_Self-Training_Based_Transductive_Zero-Shot_Learning_CVPR_2021_paper.pdf)\n+ 韩宗彦、傅振勇、陈硕和杨健：“面向广义零样本学习的对比嵌入” [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2103.16173.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FHanzy1996\u002FCE-GZSL)\n+ 刘洋、周磊、白晓、黄一飞、顾林、周俊、原田达也。“面向零样本学习的目标导向注视估计” [[pdf]](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FLiu_Goal-Oriented_Gaze_Estimation_for_Zero-Shot_Learning_CVPR_2021_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fosierboy\u002FGEM-ZSL)\n+ “用于组合型零样本学习的图嵌入学习” [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2102.01987.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FExplainableML\u002Fczsl) \n\n#### ICLR 2021\n+ 刘璐、周天翼、龙国栋、蒋静、董宣毅、张成奇。“用于广义零样本学习的等距传播网络。” [[pdf]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=-mWcQVLPSPy)\n\n#### ECCV 2020 \n+ **TF-vaegan**: Sanath Narayan\u003Csup>\\*\u003C\u002Fsup> , Akshita Gupta\u003Csup>\\*\u003C\u002Fsup> , Fahad Shahbaz Khan, Cees G. M. Snoek, Ling Shao. “潜在嵌入反馈与判别特征用于零样本分类。” ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.07833.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fakshitac8\u002Ftfvaegan).\n+ **LsrGAN**: Maunil R Vyas, Hemanth Venkateswara, and Sethuraman Panchanathan. “利用已见与未见语义关系进行生成式零样本学习。” ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2007.09549.pdf). \n+ 陈星宇、兰旭光、孙富春和郑南宁。“基于边界的一类外分布分类器用于广义零样本学习。” ECCV (2020). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2008.04872.pdf).\n+ “区域图嵌入网络用于零样本学习。” ECCV SPOTLIGHT(2020). [[pdf]]().\n\n#### ICLR 2020\n+ 特里斯坦·西尔万、琳达·佩特里尼、德文·赫尔姆。“零样本学习中的局部性和组合性。” ICLR (2020). [[pdf]](https:\u002F\u002Fopenreview.net\u002Fpdf?id=Hye_V0NKwr).\n\n#### NeurIPS 2019\n+ **VSC:** 万子宇、陈东东、李岩、闫兴光、张军格、于亦舟、廖晶。“带有视觉结构约束的转导式零样本学习。” NeurIPS (2019). 
[[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9188-transductive-zero-shot-learning-with-visual-structure-constraint.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fraywzy\u002FVSC).\n+ 柳贤宇、李宝熙。“通过同时生成与学习实现零样本学习。” NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8300-zero-shot-learning-via-simultaneous-generating-and-learning.pdf).\n+ 倪坚、张尚航、谢海勇。“用于广义零样本学习的双重对抗语义一致性网络。” NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8846-dual-adversarial-semantics-consistent-network-for-generalized-zero-shot-learning.pdf).\n+ **SGMA:** 朱一哲、谢建文、唐志强、彭曦、艾哈迈德·埃尔加马尔。NeurIPS (2019). [[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9632-semantic-guided-multi-attention-localization-for-zero-shot-learning.pdf)\n\n#### ICCV 2019\n+ **CIZSL:** 穆罕默德·埃尔霍赛尼、穆罕默德·埃尔菲基。“受创造力启发的零样本学习。” ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FElhoseiny_Creativity_Inspired_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fmhelhoseiny\u002FCIZSL)\n+ **LFGAA+SA:** 刘洋、郭继顺、蔡登、何小飞。“零样本学习中用于语义消歧的属性注意力。” ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLiu_Attribute_Attention_for_Semantic_Disambiguation_in_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FZJULearning\u002FAttentionZSL)\n+ **TCN:** 蒋华杰、王瑞平、单士光、陈希林。“用于广义零样本学习的可迁移对比网络。” ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FJiang_Transferable_Contrastive_Network_for_Generalized_Zero-Shot_Learning_ICCV_2019_paper.pdf)\n+ **GXE:** 李凯、闵仁强和傅云。“重新思考零样本学习：从条件视觉分类的角度出发。” ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLi_Rethinking_Zero-Shot_Learning_A_Conditional_Visual_Classification_Perspective_ICCV_2019_paper.pdf)\n+ 朱一哲1、谢建文、刘炳辰、艾哈迈德·埃尔加马尔。“通过交替反向传播学习特征到特征的转换器，用于生成式零样本学习。” ICCV (2019). 
[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FZhu_Learning_Feature-to-Feature_Translator_by_Alternating_Back-Propagation_for_Generative_Zero-Shot_Learning_ICCV_2019_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FEthanZhu90\u002FZSL_ABP)\n+ 扬尼克·勒卡修、埃尔韦·勒博涅、米歇尔·克鲁西亚努“在三元组损失中建模零样本学习中的类间与类内关系。” ICCV (2019). [[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2019\u002Fpapers\u002FLe_Cacheux_Modeling_Inter_and_Intra-Class_Relations_in_the_Triplet_Loss_for_ICCV_2019_paper.pdf)\n\n#### CVPR 2019\n+ **CADA-VAE:** Edgar Schönfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, Zeynep Akata. “通过对齐变分自编码器实现广义零样本和少样本学习”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.01784) [[代码]](https:\u002F\u002Fgithub.com\u002Fedgarschnfld\u002FCADA-VAE-PyTorch)\n+ **GDAN:** He Huang, Changhu Wang, Philip S. Yu, Chang-Dong Wang. “用于广义零样本学习的生成式双重对抗网络”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.04857) [[代码]](https:\u002F\u002Fgithub.com\u002Fstevehuanghe\u002FGDAN)\n+ **DeML:** Binghui Chen, Weihong Deng. “基于混合注意力解耦度量学习的零样本图像检索”。CVPR（2019）。[[pdf]](http:\u002F\u002Fwww.bhchen.cn\u002Fpaper\u002Fcvpr19.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fchenbinghui1\u002FHybrid-Attention-based-Decoupled-Metric-Learning)\n+ **Gzsl-VSE:** Pengkai Zhu, Hanxiao Wang, Venkatesh Saligrama. “基于视觉语义嵌入的广义零样本识别”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.07993.pdf)\n+ **LisGAN:** Jingjing Li, Mengmeng Jin, Ke Lu, Zhengming Ding, Lei Zhu, Zi Huang. “利用生成式零样本学习中的不变性特征”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.04092.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Flijin118\u002FLisGAN)\n+ **DGP:** Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, Eric P. Xing. 
“重新思考知识图谱传播在零样本学习中的应用”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.11724.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fcyvius96\u002FDGP)\n+ **DAZL:** Yuval Atzmon, Gal Chechik. “领域感知的广义零样本学习”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.09903.pdf)\n+ **PrEN:** Meng Ye, Yuhong Guo. “用于零样本识别的渐进式集成网络”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07473.pdf)\n+ Tristan Hascoet, Yasuo Ariki, Tetsuya Takiguchi. “关于通用物体的零样本学习”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.04957.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FTristHas\u002FGOZ)\n+ **SABR-T:** Akanksha Paul, Naraynan C Krishnan, Prateek Munjal. “语义对齐的偏置减少零样本学习”。CVPR（2019）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.07659)\n+ **AREN:** Guo-Sen Xie, Li Liu, Xiaobo Jin, Fan Zhu, Zheng Zhang, Jie Qin, Yazhou Yao, Ling Shao. “用于零样本学习的注意力区域嵌入网络”。CVPR（2019）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FXie_Attentive_Region_Embedding_Network_for_Zero-Shot_Learning_CVPR_2019_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fgsx0\u002FAttentive-Region-Embedding-Network-for-Zero-shot-Learning)\n+ Zhengming Ding, Hongfu Liu. “用于零样本学习的边缘化潜在语义编码器”。CVPR（2019）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FDing_Marginalized_Latent_Semantic_Encoder_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n+ **PQZSL:** Jin Li, Xuguang Lan, Yang Liu, Le Wang, Nanning Zheng. “使用乘积量化器压缩未知类别以实现高效的零样本分类”。CVPR（2019）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FLi_Compressing_Unknown_Images_With_Product_Quantizer_for_Efficient_Zero-Shot_Classification_CVPR_2019_paper.pdf)\n+ Mert Bulent Sariyildiz, Ramazan Gokberk Cinbis. 
“用于零样本学习的梯度匹配生成网络”。CVPR（2019）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FSariyildiz_Gradient_Matching_Generative_Networks_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n+ Bin Tong, Chao Wang, Martin Klinkigt, Yoshiyuki Kobayashi, Yuuichi Nonaka. “用于零样本学习的判别性潜在特征的层次化解耦”。CVPR（2019）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FTong_Hierarchical_Disentanglement_of_Discriminative_Latent_Features_for_Zero-Shot_Learning_CVPR_2019_paper.pdf)\n\n#### NeurIPS 2018\n+ **DCN:** Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I. Jordan.“利用深度校准网络实现广义零样本学习” NeurIPS（2018）。[[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7471-generalized-zero-shot-learning-with-deep-calibration-network.pdf)\n+ **S2GA:** Yunlong Yu, Zhong Ji, Yanwei Fu, Jichang Guo, Yanwei Pang, Zhongfei (Mark) Zhang.“用于细粒度零样本学习的堆叠语义引导注意力模型”。NeurIPS（2018）。[[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7839-stacked-semantics-guided-attention-model-for-fine-grained-zero-shot-learning.pdf)\n+ **DIPL:** An Zhao, Mingyu Ding, Jiechao Guan, Zhiwu Lu, Tao Xiang, Ji-Rong Wen“用于零样本识别的领域不变投影学习”。NeurIPS（2018）。[[pdf]](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7380-domain-invariant-projection-learning-for-zero-shot-recognition.pdf)\n\n#### ECCV 2018\n+ **SZSL:** Jie Song, Chengchao Shen, Jie Lei, An-Xiang Zeng, Kairi Ou, Dacheng Tao, Mingli Song. “基于增强属性的选择性零样本分类”。ECCV（2018）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FJie_Song_Selective_Zero-Shot_Classification_ECCV_2018_paper.pdf)\n+ **LCP-SA:** Huajie Jiang, Ruiping Wang, Shiguang Shan, Xilin Chen. “通过结构对齐学习类原型以实现零样本识别”。ECCV（2018）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FHuajie_Jiang_Learning_Class_Prototypes_ECCV_2018_paper.pdf)\n+ **MC-ZSL:** Rafael Felix, Vijay Kumar B. G., Ian Reid, Gustavo Carneiro. 
“多模态循环一致的广义零样本学习”。ECCV（2018）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ECCV_2018\u002Fpapers\u002FRAFAEL_FELIX_Multi-modal_Cycle-consistent_Generalized_ECCV_2018_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Frfelixmg\u002Ffrwgan-eccv18)\n\n\n#### CVPR 2018\n\n+ **GCN:** 王小龙、叶宇飞、阿比纳夫·古普塔。“基于语义嵌入和知识图谱的零样本识别”。CVPR（2018）。[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.08035.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002FJudyYe\u002Fzero-shot-gcn)]\n+ **PSR:** 亚沙斯·安纳达尼、索玛·比斯瓦斯。“为零样本学习保留语义关系”。CVPR（2018）。[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.03049.pdf)]\n+ **GAN-NT:** 朱一哲、穆罕默德·埃尔霍赛尼、刘冰辰、彭曦、艾哈迈德·埃尔加马尔。“一种基于生成对抗网络的、从噪声文本中进行零样本学习的方法”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FZhu_A_Generative_Adversarial_CVPR_2018_paper.pdf)]\n+ **TUE:** 宋杰、沈成超、杨业洲、刘洋、宋明利。“用于零样本学习的直推式无偏嵌入”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FSong_Transductive_Unbiased_Embedding_CVPR_2018_paper.pdf)]\n+ **SP-AEN:** 陈龙、张汉旺、肖俊、刘伟、施福昌。“使用语义保持对抗嵌入网络的零样本视觉识别”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FChen_Zero-Shot_Visual_Recognition_CVPR_2018_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fzjuchenlong\u002Fsp-aen.cvpr18)]\n+ **ML-SKG:** 李忠伟、方威、叶志宽、王宇彰。“基于结构化知识图谱的多标签零样本学习”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLee_Multi-Label_Zero-Shot_Learning_CVPR_2018_paper.pdf)] [[项目](https:\u002F\u002Fpeople.csail.mit.edu\u002Fweifang\u002Fproject\u002Fvll18-mlzsl\u002F)]\n+ **GZSL-SE:** 文奈·库马尔·维尔马、贡迪普·阿罗拉、阿希什·米什拉、皮尤什·赖。“通过合成样例进行广义零样本学习”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FVerma_Generalized_Zero-Shot_Learning_CVPR_2018_paper.pdf)]\n+ **FGN:** 
西安永钦、托比亚斯·洛伦茨、伯恩特·席勒、泽伊内普·阿卡塔。“用于零样本学习的特征生成网络”。CVPR（2018）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FXian_Feature_Generating_Networks_CVPR_2018_paper.pdf)] [[代码](http:\u002F\u002Fdatasets.d2.mpi-inf.mpg.de\u002Fxian\u002Fcvpr18xian.zip)] [[项目](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision-and-multimodal-computing\u002Fresearch\u002Fzero-shot-learning\u002Ffeature-generating-networks-for-zero-shot-learning\u002F)]\n+ **LDF:** 李燕、张军格、张建国、黄凯奇。“用于零样本识别的潜在特征判别性学习”。CVPR（2018）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FLi_Discriminative_Learning_of_CVPR_2018_paper.pdf) \n+ **WSL:** 牛丽、阿肖克·维拉拉加万和阿舒·萨巴瓦尔。“网络监督学习与零样本学习的结合：一种细粒度分类的混合方法”。CVPR（2018）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fpapers\u002FNiu_Webly_Supervised_Learning_CVPR_2018_paper.pdf)\n\n#### TPAMI 2018\n\n+ **C-GUB:** 西安永钦、克里斯托夫·H·兰佩特、伯恩特·席勒、泽伊内普·阿卡塔。“零样本学习——对优点、缺点和不足的全面评估”。TPAMI（2018）。[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1707.00600.pdf)] [[项目](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Fdepartments\u002Fcomputer-vision-and-multimodal-computing\u002Fresearch\u002Fzero-shot-learning\u002Fzero-shot-learning-the-good-the-bad-and-the-ugly\u002F)]\n\n#### AAAI 2018, 2017\n+ **GANZrl:** 彭斌、马丁·克林基特、陈俊文、崔宪坤、孔权、村上友和、小林义幸。“带有语义增强的对抗式零样本学习”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16805\u002F15965)\n+ **JDZsL:** 索黑尔·科卢里、穆罕默德·罗斯塔米、尤里·欧韦奇科、金京南。“用于零样本学习的联合字典”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16404\u002F16723)\n+ **VZSL:** 王文林、蒲云晨、文奈·库马尔·维尔马、范凯、张亦哲、陈昌佑、皮尤什·赖、劳伦斯·卡林。“通过类别条件深度生成模型进行零样本学习”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16087\u002F16709)\n+ **AS:** 
郭宇晨、丁贵光、韩俊功、唐盛。“基于属性选择的零样本学习”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16350\u002F16272)\n+ **DSSC:** 李燕、贾珍、张军格、黄凯奇、谭天牛。“用于零样本学习的深层语义结构约束”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16309\u002F16294)\n+ **ZsRDA:** 杨龙、刘莉、申玉明、邵玲。“迈向经济实惠的语义搜索：基于主导属性的零样本检索”。AAAI（2018）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI18\u002Fpaper\u002Fview\u002F16626\u002F16314)\n+ **DCL:** 郭宇晨、丁贵光、韩俊功、高悦。“通过迁移样本和伪标签进行直接分类器学习实现零样本识别”。AAAI（2017）。[[pdf]](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002Fview\u002F14160\u002F14281)\n\n\n#### ICCV 2017\n+ **A2C:** 贝尔坎·德米雷尔、拉马赞·戈克贝尔克·琴比斯、纳兹莉·伊基兹勒-琴比斯。“Attributes2Classname：一种基于属性的无监督零样本学习判别模型”。ICCV（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FDemirel_Attributes2Classname_A_Discriminative_ICCV_2017_paper.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fberkandemirel\u002Fattributes2classname)]\n+ **PVE:** 索拉维特·昌皮尼奥、魏伦·赵、费莎。“为零样本学习预测未见类别的视觉示例”。ICCV（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FChangpinyo_Predicting_Visual_Exemplars_ICCV_2017_paper.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fpujols\u002FZero-shot-learning-journal)]\n+ **LDL:** 江华杰、王瑞平、单世光、杨毅、陈锡林。“为零样本分类学习判别性潜在属性”。ICCV（2017）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_ICCV_2017\u002Fpapers\u002FJiang_Learning_Discriminative_Latent_ICCV_2017_paper.pdf)]\n\n#### CVPR 2017\n\n+ **Deep-SCoRe:** Pedro Morgado、Nuno Vasconcelos。“用于零样本识别的语义一致性正则化”。CVPR（2017）。[[pdf](http:\u002F\u002Fwww.svcl.ucsd.edu\u002F~morgado\u002Fscore\u002Fscore-cvpr17.pdf)] [[代码](https:\u002F\u002Fgithub.com\u002Fpedro-morgado\u002Fscore-zeroshot)]\n+ **DEM:** Li Zhang、Tao Xiang、Shaogang Gong。“学习用于零样本学习的深度嵌入模型”。CVPR（2017）。[[pdf](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.05088.pdf)] 
[[代码](https:\u002F\u002Fgithub.com\u002Flzrobots\u002FDeepEmbeddingModel_ZSL)]\n+ **VDS:** Yang Long、Li Liu、Ling Shao、Fumin Shen、Guiguang Ding、Jungong Han。“从零样本学习到传统监督分类：未见视觉数据合成”。CVPR（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fhtml\u002FLong_From_Zero-Shot_Learning_CVPR_2017_paper.html)]\n+ **ESD:** Zhengming Ding、Ming Shao、Yun Fu。“用于零样本学习的低秩嵌入集成语义词典”。CVPR（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FDing_Low-Rank_Embedded_Ensemble_CVPR_2017_paper.pdf)]\n+ **SAE:** Elyor Kodirov、Tao Xiang、Shaogang Gong。“用于零样本学习的语义自编码器”。CVPR（2017）。[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FKodirov_Semantic_Autoencoder_for_CVPR_2017_paper.pdf) [[代码](https:\u002F\u002Fgithub.com\u002FElyorcv\u002FSAE)]\n+ **DVSM:** Yanan Li、Donghui Wang、Huanhang Hu、Yuetan Lin、Yueting Zhuang。“使用双重视觉—语义映射路径的零样本识别”。CVPR（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FLi_Zero-Shot_Recognition_Using_CVPR_2017_paper.pdf)]\n+ **MTF-MR:** Xing Xu、Fumin Shen、Yang Yang、Dongxiang Zhang、Heng Tao Shen、Jingkuan Song。“流形正则化的矩阵三因子分解用于零样本学习”。CVPR（2017）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FXu_Matrix_Tri-Factorization_With_CVPR_2017_paper.pdf)]\n+ Nour Karessli、Zeynep Akata、Bernt Schiele、Andreas Bulling。“用于零样本图像分类的眼动嵌入”。CVPR（2017）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.09309.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002FNoura-kr\u002FCVPR17)\n+ **GUB:** Yongqin Xian、Bernt Schiele、Zeynep Akata。“零样本学习——好的、坏的与丑陋的”。CVPR（2017）。\n[[pdf]](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FXian_Zero-Shot_Learning_-_CVPR_2017_paper.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fpedro-morgado\u002Fscore-zeroshot)\n\n\n#### CVPR 2016\n\n+ **MC-ZSL:** Zeynep Akata、Mateusz Malinowski、Mario Fritz、Bernt 
Schiele。“具有强监督的多线索零样本学习”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FAkata_Multi-Cue_Zero-Shot_Learning_CVPR_2016_paper.pdf)] [[代码]](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Findex.php?id=2935)\n+ **LATEM:** Yongqin Xian、Zeynep Akata、Gaurav Sharma、Quynh Nguyen、Matthias Hein、Bernt Schiele。“用于零样本分类的潜在嵌入”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FXian_Latent_Embeddings_for_CVPR_2016_paper.pdf)][[代码](http:\u002F\u002Fdatasets.d2.mpi-inf.mpg.de\u002Fyxian16cvpr\u002FlatEm.zip)]\n+ **LIM:** Ruizhi Qiao、Lingqiao Liu、Chunhua Shen、Anton van den Hengel。“少即是多：通过抑制噪声从在线文本文档中进行零样本学习”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FQiao_Less_Is_More_CVPR_2016_paper.pdf)]\n+ **SYNC:** Soravit Changpinyo、Wei-Lun Chao、Boqing Gong、Fei Sha。“用于零样本学习的合成分类器”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FChangpinyo_Synthesized_Classifiers_for_CVPR_2016_paper.pdf)][[代码](https:\u002F\u002Fgithub.com\u002Fpujols\u002Fzero-shot-learning)]\n+ **RML:** Ziad Al-Halah、Makarand Tapaswi、Rainer Stiefelhagen。“找回缺失的环节：预测类—属性关联以实现无监督零样本学习”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FAl-Halah_Recovering_the_Missing_CVPR_2016_paper.pdf)]\n+ **SLE:** Ziming Zhang、Venkatesh Saligrama。“通过联合潜在相似性嵌入进行零样本学习”。CVPR（2016）。[[pdf](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2016\u002Fpapers\u002FZhang_Zero-Shot_Learning_via_CVPR_2016_paper.pdf)] [[代码](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1RimUgUlf2tfpntzlxdlYaAvm34HX0fUb\u002Fview?usp=sharing)]\n\n\n#### ECCV 2016\n+ Wei-Lun Chao、Soravit Changpinyo、Boqing Gong2、Fei Sha。“针对野外目标识别的广义零样本学习的实证研究与分析”。ECCV（2016）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1605.04253.pdf)\n+ **MTE:** Xun Xu、Timothy M. 
Hospedales、Shaogang Gong。“具有优先级数据增强的多任务零样本动作识别”。ECCV（2016）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.08663.pdf)\n+ Ziming Zhang、Venkatesh Saligrama。“通过结构化预测进行零样本识别”。ECCV（2016）。[[pdf]](https:\u002F\u002Fpdfs.semanticscholar.org\u002Fbe96\u002F1637db8561b027fd48788c64e7919c7cd760.pdf)\n+ Maxime Bucher、Stephane Herbin、Frederic Jurie。“通过度量学习提高语义嵌入一致性，用于零样本分类”。ECCV（2016）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1607.08085.pdf)\n\n#### AAAI 2016\n+ **RKT:** Donghui Wang、Yanan Li、Yuetan Lin、Yueting Zhuang。“零样本学习中的关系知识迁移”。AAAI（2016）。[[pdf]](https:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI16\u002Fpaper\u002Fview\u002F11802\u002F11854)\n\n#### TPAMI 2016、2015、2013\n+ **ALE:** Zeynep Akata、Florent Perronnin、Zaid Harchaoui和Cordelia Schmid。“用于图像分类的标签嵌入”。TPAMI（2016）。[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1503.08677.pdf)\n+ **TMV:** Yanwei Fu、Timothy M. Hospedales、Tao Xiang、Shaogang Gong。“转导式多视图零样本学习”。TPAMI（2015）[[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1501.04560.pdf) [[代码]](https:\u002F\u002Fgithub.com\u002Fyanweifu\u002Fembedding_zero-shot-learning)\n+ **DAP:** Christoph H. 
Lampert, Hannes Nickisch, and Stefan Harmeling. “Attribute-Based Classification for Zero-Shot Visual Object Categorization”. TPAMI (2013) [[pdf]](http:\u002F\u002Fpub.ist.ac.at\u002F~chl\u002Fpapers\u002Flampert-pami2013.pdf)\n\n#### CVPR 2015\n+ **SJE:** Zeynep Akata, Scott Reed, Daniel Walter, Honglak Lee, Bernt Schiele. “Evaluation of Output Embeddings for Fine-Grained Image Classification”. CVPR (2015). [[pdf]](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Ffileadmin\u002Finf\u002Fd2\u002Fakata\u002F1690.pdf) [[code]](https:\u002F\u002Fwww.mpi-inf.mpg.de\u002Findex.php?id=2325) \n+ Zhenyong Fu, Tao Xiang, Elyor Kodirov, Shaogang Gong. “Zero-Shot Object Recognition by Semantic Manifold Distance”. CVPR (2015). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2015\u002Fpapers\u002FFu_Zero-Shot_Object_Recognition_2015_CVPR_paper.pdf)\n\n#### ICCV 2015\n+ **SSE:** Ziming Zhang, Venkatesh Saligrama. “Zero-Shot Learning via Semantic Similarity Embedding”. ICCV (2015). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FZhang_Zero-Shot_Learning_via_ICCV_2015_paper.pdf) [[code]](https:\u002F\u002Fzimingzhang.wordpress.com\u002Fsource-code\u002F)\n+ **LRL:** Xin Li, Yuhong Guo, Dale Schuurmans. “Semi-Supervised Zero-Shot Classification with Label Representation Learning”. ICCV (2015). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FLi_Semi-Supervised_Zero-Shot_Classification_ICCV_2015_paper.pdf)\n+ **UDA:** Elyor Kodirov, Tao Xiang, Zhenyong Fu, Shaogang Gong. “Unsupervised Domain Adaptation for Zero-Shot Learning”. ICCV (2015). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FKodirov_Unsupervised_Domain_Adaptation_ICCV_2015_paper.pdf)\n+ Jimmy Lei Ba, Kevin Swersky, Sanja Fidler, Ruslan Salakhutdinov. “Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions”. ICCV (2015). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_iccv_2015\u002Fpapers\u002FBa_Predicting_Deep_Zero-Shot_ICCV_2015_paper.pdf)\n\n\n#### NIPS 2014, 2013, 2009\n+ Dinesh Jayaraman, Kristen Grauman. “Zero-Shot Recognition with Unreliable Attributes” NIPS (2014) [[pdf]](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5290-zero-shot-recognition-with-unreliable-attributes.pdf)\n+ **CMT:** Richard Socher, Milind Ganjoo, Christopher D. Manning, Andrew Y. Ng. “Zero-Shot Learning Through Cross-Modal Transfer” 
NIPS (2013) [[pdf]](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5027-zero-shot-learning-through-cross-modal-transfer.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fmganjoo\u002Fzslearning)\n+ **DeViSE:** Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeffrey Dean, Marc'Aurelio Ranzato, Tomas Mikolov. “DeViSE: A Deep Visual-Semantic Embedding Model” NIPS (2013) [[pdf]](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5204-devise-a-deep-visual-semantic-embedding-model.pdf)\n+ Mark Palatucci, Dean Pomerleau, Geoffrey Hinton, Tom M. Mitchell. “Zero-Shot Learning with Semantic Output Codes” NIPS (2009) [[pdf]](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F3650-zero-shot-learning-with-semantic-output-codes.pdf)\n\n#### ECCV 2014\n+ **TMV-BLP:** Yanwei Fu, Timothy M. Hospedales, Tao Xiang, Zhenyong Fu, Shaogang Gong. “Transductive Multi-view Embedding for Zero-Shot Recognition and Annotation” ECCV (2014). [[pdf]](https:\u002F\u002Fwww.eecs.qmul.ac.uk\u002F~txiang\u002Fpublications\u002FFu_et_al_embedding_eccv2014.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fyanweifu\u002Fembedding_zero-shot-learning)\n+ Stanislaw Antol, Larry Zitnick, Devi Parikh. “Zero-Shot Learning via Visual Abstraction”. ECCV (2014). [[pdf]](https:\u002F\u002Fcomputing.ece.vt.edu\u002F~santol\u002Fprojects\u002Fzsl_via_visual_abstraction\u002Feccv2014_zsl_via_visual_abstraction.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FStanislawAntol\u002Fzsl_via_visual_abstraction) [[project]](https:\u002F\u002Fcomputing.ece.vt.edu\u002F~santol\u002Fprojects\u002Fzsl_via_visual_abstraction\u002F)\n\n#### CVPR 2013\n+ **ALE:** Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. “Label-Embedding for Attribute-Based Classification”. CVPR (2013). [[pdf]](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2013\u002Fpapers\u002FAkata_Label-Embedding_for_Attribute-Based_2013_CVPR_paper.pdf)\n\n#### Other papers\n+ **EsZSL:** Bernardino Romera-Paredes, Philip H. S. Torr. “An Embarrassingly Simple Approach to Zero-Shot Learning”. ICML (2015). [[pdf]](http:\u002F\u002Fproceedings.mlr.press\u002Fv37\u002Fromera-paredes15.pdf) [[code]](https:\u002F\u002Fgithub.com\u002FMLWave\u002Fextremely-simple-one-shot-learning)\n+ **AEZSL:** “Zero-Shot Learning via Category-Specific Visual-Semantic Mapping and Label Refinement” IEEE SPS (2018). [[pdf]](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8476580)\n+ **ZSGD:** Tiancheng Zhao, Maxine Eskenazi. “Zero-Shot Dialog Generation with Cross-Domain Latent Actions” 
SIGDIAL (2018). [[pdf]](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.04803v1) [[code]](https:\u002F\u002Fgithub.com\u002Fsnakeztc\u002FNeuralDialog-ZSDG)\n+ Yanwei Fu, Tao Xiang, Yu-Gang Jiang, Xiangyang Xue, Leonid Sigal, Shaogang Gong. “Recent Advances in Zero-Shot Recognition”. IEEE Signal Processing Magazine. [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1710.04837.pdf)\n+ Michael Kampffmeyer, Yinbo Chen, Xiaodan Liang, Hao Wang, Yujia Zhang, Eric P. Xing. “Rethinking Knowledge Graph Propagation for Zero-Shot Learning” arXiv (2018). [[pdf]](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.11724v2.pdf) [[code]](https:\u002F\u002Fgithub.com\u002Fcyvius96\u002Fadgpm)\n+ **Survey:** Wei Wang, Vincent W. Zheng, Han Yu, Chunyan Miao. “A Survey of Zero-Shot Learning: Settings, Methods, and Applications”. TIST (2019). [[pdf]](https:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?doid=3306498.3293318)\n\n\n\n\n### Datasets\n+ **LAD:** A Large-scale Attribute Dataset. Classes: 230. [[link]](https:\u002F\u002Fgithub.com\u002FPatrickZH\u002FA-Large-scale-Attribute-Dataset-for-Zero-shot-Learning)\n+ **CUB:** Caltech-UCSD Birds. Classes: 200. [[link]](http:\u002F\u002Fwww.vision.caltech.edu\u002Fvisipedia\u002FCUB-200-2011.html)\n+ **AWA2:** Animals with Attributes. Classes: 50. [[link]](https:\u002F\u002Fcvml.ist.ac.at\u002FAwA2\u002F)\n+ **aPY:** Attribute Pascal and Yahoo. Classes: 32. [[link]](http:\u002F\u002Fvision.cs.uiuc.edu\u002Fattributes\u002F)\n+ **Flowers:** Two datasets, with 17 and 102 classes respectively. [[link]](http:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fdata\u002Fflowers\u002F)\n+ **SUN:** Scene attribute dataset. Classes: 717. [[link]](http:\u002F\u002Fcs.brown.edu\u002F~gmpatter\u002Fsunattributes.html)\n+ **HMDB51:** A large human motion database. Classes: 51. [[link]](https:\u002F\u002Fserre-lab.clps.brown.edu\u002Fresource\u002Fhmdb-a-large-human-motion-database\u002F#Downloads)\n+ **UCF101:** An action recognition dataset of realistic action videos collected from YouTube. Classes: 101. [[link]](https:\u002F\u002Fwww.crcv.ucf.edu\u002Fdata\u002FUCF101.php)\n\n### Starter code\nThe repository contains a `Demo` folder with a Jupyter Notebook that works step by step through “An Embarrassingly Simple Approach to Zero-Shot Learning”. ICML (2015). It serves as starter code for understanding the basic concepts of zero-shot learning.\n\n\n## Other resources\n+ https:\u002F\u002Fmedium.com\u002F@alitech_2017\u002Ffrom-zero-to-hero-shaking-up-the-field-of-zero-shot-learning-c43208f71332\n+ https:\u002F\u002Fwww.analyticsindiamag.com\u002Fwhat-is-zero-shot-learning\u002F\n+ 
https:\u002F\u002Fmedium.com\u002F@cetinsamet\u002Fzero-shot-learning-53080995d45f\n+ https:\u002F\u002Famitness.com\u002F2020\u002F05\u002Fzero-shot-text-classification\u002F\n\n## License\n\n[![CC0](http:\u002F\u002Fmirrors.creativecommons.org\u002Fpresskit\u002Fbuttons\u002F88x31\u002Fsvg\u002Fcc-zero.svg)](https:\u002F\u002Fcreativecommons.org\u002Fpublicdomain\u002Fzero\u002F1.0\u002F)","# awesome-zero-shot-learning Quick Start Guide\n\n`awesome-zero-shot-learning` is not a single installable package or library but a curated **Awesome List** that gathers papers, benchmark comparison results, open-source code links, and related resources for Zero-Shot Learning (ZSL).\n\nThis guide shows developers how to use the list to quickly locate the research results and code implementations they need.\n\n## Environment setup\n\nThe repository itself contains no core algorithm code; it points to independent paper projects, so the environment depends on which paper you choose to reproduce. Most modern ZSL projects need the following basics:\n\n*   **Operating system**: Linux (Ubuntu 18.04\u002F20.04 recommended) or macOS\n*   **Language**: Python 3.6+\n*   **Deep learning framework**: PyTorch or TensorFlow (check the chosen paper's `requirements.txt` for exact versions)\n*   **Dependency management**: `pip` or `conda`\n\n**Recommended first step:**\nCreate an isolated virtual environment to avoid dependency conflicts.\n```bash\npython -m venv zsl_env\nsource zsl_env\u002Fbin\u002Factivate  # On Windows: zsl_env\\Scripts\\activate\n```\n\n## Getting the resources\n\n### 1. Clone the list\nFirst, clone the Awesome list for local browsing:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fchichilicious\u002Fawesome-zero-shot-learning.git\ncd awesome-zero-shot-learning\n```\n\n### 2. Pick and install a specific project\nBrowse the [Papers](#Papers) section, find a method that interests you (e.g. `TF-vaegan`, `CADA-VAE`, `GEM-ZSL`), and follow its `[[code]]` link to the authors' repository.\n\n**Generic installation steps (typical of PyTorch projects):**\n\nSuppose you chose **CE-GZSL** (Contrastive Embedding for Generalized Zero-Shot Learning):\n\n```bash\n# Clone the project code\ngit clone https:\u002F\u002Fgithub.com\u002FHanzy1996\u002FCE-GZSL.git\ncd CE-GZSL\n\n# Install dependencies (users in mainland China can speed this up with the Tsinghua mirror)\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# If the project ships a setup.py, you can also run\n# pip install -e . 
-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **Note**: Some older projects ship no `requirements.txt`; install common libraries such as `torch`, `numpy`, and `scikit-learn` manually as the error messages indicate.\n\n## Basic usage\n\nEntry scripts differ from project to project, so what follows is the **general usage pattern** shared by most projects on the list. Always defer to the `README.md` of the specific repository.\n\n### Example: running a typical ZSL training script\n\nMost projects follow a “download data -> configure paths -> run training” flow.\n\n**1. Prepare the datasets**\nCommon ZSL datasets include CUB, AWA2, and SUN. You usually create a `data\u002F` folder in the project root and place the preprocessed `.mat` or `.pickle` files there.\n*(See the official code of the corresponding paper for the exact preprocessing steps.)*\n\n**2. Edit the configuration**\nCheck the project's configuration (e.g. `config.py` or command-line arguments) and make sure the data paths are correct.\n\n**3. Train and evaluate**\nA typical run looks like this (modeled on `GEM-ZSL` and similar projects from the list):\n\n```bash\n# Enter the project directory\ncd CE-GZSL \n\n# Run the training script (arguments vary by project)\npython main.py --dataset CUB --mode GZSL --gpu_id 0\n\n# Or run a dedicated evaluation script\npython evaluate.py --checkpoint runs\u002Fbest_model.pth\n```\n\n### How to find code for a specific task\nIn your local clone of `awesome-zero-shot-learning`, search for keywords to locate resources:\n\n*   **Generative methods**: search “GAN”, “VAE” (e.g. `LisGAN`, `CADA-VAE`)\n*   **Discriminative methods**: search “Embedding”, “Classifier”\n*   **Generalized zero-shot learning (GZSL)**: most papers after 2018 support the GZSL setting.\n*   **Code availability**: prefer entries marked `[[code]]`, which usually come with runnable PyTorch\u002FTensorFlow implementations.\n\nUsed this way, `awesome-zero-shot-learning` works as a map that gets you to state-of-the-art zero-shot learning algorithms for development and research.","A medical-AI startup is building a diagnostic system to recognize rare skin lesions, but training data is extremely scarce and cannot cover every unseen condition.\n\n### Without awesome-zero-shot-learning\n- The team spends weeks blindly combing academic databases for “zero-shot learning” papers, unable to tell which algorithms suit medical imaging.\n- With no shared code baseline, engineers reimplement classic models from scratch, and experiments routinely fail for lack of key preprocessing scripts or hyperparameter settings.\n- Conventional models simply fail on unseen lesion categories, trapping the team in a “collect and label new data before every iteration” loop that badly delays launch.\n- Without authoritative comparison results, they cannot measure the gap to the state of the art (SOTA), so technology choices are guesswork.\n\n### With awesome-zero-shot-learning\n- Researchers use the clearly categorized paper list to quickly home in on top-venue algorithms built for generalized zero-shot learning, such as TF-vaegan and VSC, sharply shortening the survey phase.\n- Reusing the starter code and official implementation links in the repository (e.g. GitHub sources) cuts model setup from weeks to days and enables rapid validation on medical datasets.\n- Drawing on the list's resources on “semantic relations” and “visual structure constraints”, the model classifies lesions it has never seen from attribute descriptions alone, with high accuracy.\n- Guided by the standardized dataset comparison results curated in the repository, the team pinpoints optimization directions and quickly pushes accuracy close to SOTA.\n\nawesome-zero-shot-learning 
brings together cutting-edge algorithms and hands-on resources, helping the team break through the data-scarcity bottleneck and move from “no data, no training” to “recognition from a description alone”.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fsbharadwajj_awesome-zero-shot-learning_a051e53c.png","sbharadwajj","Shrisha Bharadwaj","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fsbharadwajj_8c07bc6f.jpg",null,"Max-Planck Institute for Intelligent Systems","Tübingen","https:\u002F\u002Fsbharadwajj.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fsbharadwajj",936,165,"2026-01-24T08:56:26",5,"","Not specified",{"notes":90,"python":88,"dependencies":91},"This repository is an Awesome List of zero-shot learning resources: it mainly collects papers, dataset links, and code repository addresses. It is not itself runnable software, so the README contains no information on runtime environment, dependency libraries, or hardware requirements. Consult the standalone repositories of the specific projects linked from the list (such as tfvaegan, CIZSL, GDAN) for their environment configuration.",[],[13],[94,95,96,97],"zero-shot-learning","deep-learning","awesome-list","papers","2026-03-27T02:49:30.150509","2026-04-06T07:14:57.663345",[],[]]