[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-yassouali--ML-paper-notes":3,"tool-yassouali--ML-paper-notes":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",147882,2,"2026-04-09T11:32:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108111,"2026-04-08T11:23:26",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 
# ML-paper-notes

**Repository**: [yassouali/ML-paper-notes](https://github.com/yassouali/ML-paper-notes)

:notebook: Notes and summaries of various ML, Computer Vision & NLP papers.

ML-paper-notes is an open-source collection of paper notes covering machine learning, computer vision, and natural language processing. It distills the core ideas of a large number of papers into concise PDF summaries, organized by topic, spanning self-supervised learning, contrastive learning, semi-supervised learning, and several other active areas.

Faced with a flood of dense academic papers, researchers and developers often spend considerable time just extracting the key information. ML-paper-notes addresses this with clearly structured reading notes that highlight each paper's contributions and technical details, substantially lowering the time cost of a literature survey.

The collection suits researchers, algorithm engineers, and students who want to keep up with the field, whether the goal is finding ideas, reproducing models, or writing a literature review. Its distinctive strengths are the topic-based organization and the complete arc it traces from early classics (image colorization, jigsaw prediction) to recent semi-supervised learning with large models, making it a useful map of how self-supervised representation learning has evolved.

# ML Papers

This repo contains notes and short summaries of some ML related papers I come across, organized by subject; the summaries are in the form of PDFs.

## Self-Supervised & Contrastive Learning

- Self-Supervised Relational Reasoning for Representation Learning (2020): [[Paper]](https://arxiv.org/abs/2006.05849) [[Notes]](notes/103_SSL_relation_reasoning.pdf)
- Big Self-Supervised Models are Strong Semi-Supervised Learners (2020): [[Paper]](https://arxiv.org/abs/2006.10029) [[Notes]](notes/95_big_self-supervised_models.pdf)
- Debiased Contrastive Learning (2020): [[Paper]](https://arxiv.org/abs/2007.00224) [[Notes]](notes/97_debiased_contrastive_learning.pdf)
- Selfie: Self-supervised Pretraining for Image Embedding (2019): [[Paper]](https://arxiv.org/abs/1906.02940) [[Notes]](notes/76_selfie_pretraining_for_img_embeddings.pdf)
- Self-Supervised Representation Learning by Rotation Feature Decoupling (2019): [[Paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Feng_Self-Supervised_Representation_Learning_by_Rotation_Feature_Decoupling_CVPR_2019_paper.pdf) [[Notes]](notes/73_SSL_by_rotation_decoupling.pdf)
- Revisiting Self-Supervised Visual Representation Learning (2019): [[Paper]](https://arxiv.org/abs/1901.09005) [[Notes]](notes/72_revisiting_SSL.pdf)
- AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations (2019): [[Paper]](https://arxiv.org/abs/1901.04596) [[Notes]](notes/74_AFT_vs_AED.pdf)
- Boosting Self-Supervised Learning via Knowledge Transfer (2018): [[Paper]](https://arxiv.org/abs/1805.00385) [[Notes]](notes/67_boosting_self_super_via_trsf_learning.pdf)
- Self-Supervised Feature Learning by Learning to Spot Artifacts (2018): [[Paper]](https://arxiv.org/abs/1806.05024) [[Notes]](notes/69_SSL_by_learn_to_spot_artifacts.pdf)
- Unsupervised Representation Learning by Predicting Image Rotations (2018): [[Paper]](https://arxiv.org/abs/1803.07728) [[Notes]](notes/68_unsup_img_rep_learn_by_rot_predic.pdf)
- Cross Pixel Optical-Flow Similarity for Self-Supervised Learning (2018): [[Paper]](https://arxiv.org/abs/1807.05636) [[Notes]](notes/75_cross_pixel_optical_flow.pdf)
- Multi-task Self-Supervised Visual Learning (2017): [[Paper]](https://arxiv.org/abs/1708.07860) [[Notes]](notes/64_multi_task_self_supervised.pdf)
- Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction (2017): [[Paper]](https://arxiv.org/abs/1611.09842) [[Notes]](notes/65_split_brain_autoencoders.pdf)
- Colorization as a Proxy Task for Visual Understanding (2017): [[Paper]](https://arxiv.org/abs/1703.04044) [[Notes]](notes/66_colorization_as_a_proxy_for_viz_under.pdf)
- Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles (2017): [[Paper]](https://arxiv.org/abs/1603.09246) [[Notes]](notes/63_solving_jigsaw_puzzles.pdf)
- Unsupervised Visual Representation Learning by Context Prediction (2016): [[Paper]](https://arxiv.org/abs/1505.05192) [[Notes]](notes/62_unsupervised_learning_with_context_prediction.pdf)
- Colorful image colorization (2016): [[Paper]](https://richzhang.github.io/colorization/) [[Notes]](notes/59_colorful_colorization.pdf)
- Learning visual groups from co-occurrences in space and time (2015): [[Paper]](https://arxiv.org/abs/1511.06811) [[Notes]](notes/61_visual_groups_from_co_occurrences.pdf)
- Discriminative unsupervised feature learning with exemplar convolutional neural networks (2015): [[Paper]](https://arxiv.org/abs/1406.6909) [[Notes]](notes/60_exemplar_CNNs.pdf)

## Semi-Supervised Learning

- Negative sampling in semi-supervised learning (2020): [[Paper]](https://arxiv.org/abs/1911.05166) [[Notes]](notes/94_nagative_sampling_SSL.pdf)
- Time-Consistent Self-Supervision for Semi-Supervised Learning (2020): [[Paper]](https://proceedings.icml.cc/static/paper_files/icml/2020/5578-Paper.pdf) [[Notes]](notes/93_time_consistent_SSL.pdf)
- Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning (2019): [[Paper]](https://arxiv.org/abs/1909.01804) [[Notes]](notes/79_dual_student.pdf)
- S4L: Self-Supervised Semi-Supervised Learning (2019): [[Paper]](https://arxiv.org/abs/1905.03670) [[Notes]](notes/83_S4L.pdf)
- Semi-Supervised Learning by Augmented Distribution Alignment (2019): [[Paper]](https://arxiv.org/abs/1905.08171) [[Notes]](notes/80_SSL_aug_dist_align.pdf)
- MixMatch: A Holistic Approach to Semi-Supervised Learning (2019): [[Paper]](https://arxiv.org/abs/1905.02249) [[Notes]](notes/45_mixmatch.pdf)
- Unsupervised Data Augmentation (2019): [[Paper]](https://arxiv.org/abs/1904.12848) [[Notes]](notes/39_unsupervised_data_aug.pdf)
- Interpolation Consistency Training for Semi-Supervised Learning (2019): [[Paper]](https://arxiv.org/abs/1903.03825) [[Notes]](notes/44_interpolation_consistency_tranining.pdf)
- Deep Co-Training for Semi-Supervised Image Recognition (2018): [[Paper]](https://arxiv.org/abs/1803.05984) [[Notes]](notes/46_deep_co_training_img_rec.pdf)
- Unifying semi-supervised and robust learning by mixup (2019): [[Paper]](https://openreview.net/forum?id=r1gp1jRN_4) [[Notes]](notes/42_mixmixup.pdf)
- Realistic Evaluation of Deep Semi-Supervised Learning Algorithms (2018): [[Paper]](https://arxiv.org/abs/1804.09170) [[Notes]](notes/37_realistic_eval_of_deep_ss.pdf)
- Semi-Supervised Sequence Modeling with Cross-View Training (2018): [[Paper]](https://arxiv.org/abs/1809.08370) [[Notes]](notes/38_cross_view_semi_supervised.pdf)
- Virtual Adversarial Training (2017): [[Paper]](https://arxiv.org/abs/1704.03976) [[Notes]](notes/40_virtual_adversarial_training.pdf)
- Mean teachers are better role models (2017): [[Paper]](https://arxiv.org/abs/1703.01780) [[Notes]](notes/56_mean_teachers.pdf)
- Temporal Ensembling for Semi-Supervised Learning (2017): [[Paper]](https://arxiv.org/abs/1610.02242) [[Notes]](notes/55_temporal-ensambling.pdf)
- Semi-Supervised Learning with Ladder Networks (2015): [[Paper]](https://arxiv.org/abs/1507.02672) [[Notes]](notes/33_ladder_nets.pdf)

## Video Understanding

- Multiscale Vision Transformers (2021): [[Paper]](https://arxiv.org/abs/2104.11227) [[Notes]](notes/Multiscale_Vision_Transformers.pdf)
- ViViT: A Video Vision Transformer (2021): [[Paper]](https://arxiv.org/abs/2103.15691) [[Notes]](notes/ViViT_A_Video_Vision_Transformer.pdf)
- Space-time Mixing Attention for Video Transformer (2021): [[Paper]](https://arxiv.org/abs/2106.05968) [[Notes]](notes/Space-time_Mixing_Attention_for_Video_Transformer.pdf)
- Is Space-Time Attention All You Need for Video Understanding? (2021): [[Paper]](https://arxiv.org/abs/2102.05095) [[Notes]](notes/Is_Space-Time_Attention_All_You_Need_for_Video_Understanding.pdf)
- An Image is Worth 16x16 Words, What is a Video Worth? (2021): [[Paper]](https://arxiv.org/abs/2103.13915) [[Notes]](notes/An_Image_is_Worth_16x16_Words_What_is_a_Video_Worth.pdf)
- Temporal Query Networks for Fine-grained Video Understanding (2021): [[Paper]](https://arxiv.org/abs/2104.09496) [[Notes]](notes/Temporal_Query_Networks_for_Fine-grained_Video_Understanding.pdf)
- X3D: Expanding Architectures for Efficient Video Recognition (2020): [[Paper]](https://arxiv.org/abs/2004.04730) [[Notes]](notes/X3D_Expanding_Architectures_for_Efficient_Video_Recognition.pdf)
- Temporal Pyramid Network for Action Recognition (2020): [[Paper]](https://arxiv.org/abs/2004.03548) [[Notes]](notes/Temporal_Pyramid_Network_for_Action_Recognition.pdf)
- STM: SpatioTemporal and Motion Encoding for Action Recognition (2019): [[Paper]](https://arxiv.org/abs/1908.02486) [[Notes]](notes/STM_SpatioTemporal_and_Motion_Encoding_for_Action_Recognition.pdf)
- Video Classification with Channel-Separated Convolutional Networks (2019): [[Paper]](https://arxiv.org/abs/1904.02811) [[Notes]](notes/Video_Classification_with_Channel-Separated_Convolutional_Networks.pdf)
- Video Modeling with Correlation Networks (2019): [[Paper]](https://arxiv.org/abs/1906.03349) [[Notes]](notes/Video_Modeling_with_Correlation_Networks.pdf)
- Videos as Space-Time Region Graphs (2018): [[Paper]](https://arxiv.org/abs/1806.01810) [[Notes]](notes/Videos_as_Space-Time_Region_Graphs.pdf)
- SlowFast Networks for Video Recognition (2018): [[Paper]](https://arxiv.org/abs/1812.03982) [[Notes]](notes/SlowFast_Networks_for_Video_Recognition.pdf)
- TSM: Temporal Shift Module for Efficient Video Understanding (2018): [[Paper]](https://arxiv.org/abs/1811.08383) [[Notes]](notes/TSM_Temporal_Shift_Module_for_Efficient_Video_Understanding.pdf)
- Timeception for Complex Action Recognition (2018): [[Paper]](https://arxiv.org/abs/1812.01289) [[Notes]](notes/Timeception_for_Complex_Action_Recognition.pdf)
- Non-local Neural Networks (2017): [[Paper]](https://arxiv.org/abs/1711.07971) [[Notes]](notes/Non-local_Neural_Networks.pdf)
- Temporal Segment Networks for Action Recognition in Videos (2017): [[Paper]](https://arxiv.org/abs/1705.02953) [[Notes]](notes/Temporal_Segment_Networks_for_Action_Recognition_in_Videos..pdf)
- Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset (2017): [[Paper]](https://arxiv.org/abs/1705.07750) [[Notes]](notes/Quo_Vadis_Action_Recognition_A_New_Model_and_the_Kinetics_Dataset.pdf)
- A Closer Look at Spatiotemporal Convolutions for Action Recognition (2017): [[Paper]](https://arxiv.org/abs/1711.11248) [[Notes]](notes/A_Closer_Look_at_Spatiotemporal_Convolutions_for_Action_Recognition.pdf)
- ActionVLAD: Learning spatio-temporal aggregation for action classification (2017): [[Paper]](https://arxiv.org/abs/1704.02895) [[Notes]](notes/ActionVLAD_Learning_spatio-temporal_aggregation_for_action_classification.pdf)
- Spatiotemporal Residual Networks for Video Action Recognition (2016): [[Paper]](https://arxiv.org/abs/1611.02155) [[Notes]](notes/Spatiotemporal_Residual_Networks_for_Video_Action_Recognition.pdf)
- Deep Temporal Linear Encoding Networks (2016): [[Paper]](https://arxiv.org/abs/1611.06678) [[Notes]](notes/Deep_Temporal_Linear_Encoding_Networks.pdf)
- Temporal Convolutional Networks for Action Segmentation and Detection (2016): [[Paper]](https://arxiv.org/abs/1611.05267) [[Notes]](notes/Temporal_Convolutional_Networks_for_Action_Segmentation_and_Detection.pdf)
- Learning Spatiotemporal Features with 3D Convolutional Networks (2014): [[Paper]](https://arxiv.org/abs/1412.0767) [[Notes]](notes/Learning_Spatiotemporal_Features_with_3D_Convolutional_Network.pdf)

## Domain Adaptation, Domain & Out-of-Distribution Generalization
- Rethinking Distributional Matching Based Domain Adaptation (2020): [[Paper]](https://arxiv.org/abs/2006.13352) [[Notes]](notes/98_rethinking_distributional_matching.pdf)
- Transferability vs. Discriminability: Batch Spectral Penalization (2019): [[Paper]](http://proceedings.mlr.press/v97/chen19i.html) [[Notes]](notes/91_batch_spectral_normalization.pdf)
- On Learning Invariant Representations for Domain Adaptation (2019): [[Paper]](https://arxiv.org/abs/1901.09453) [[Notes]](notes/90_on_learning_invariant_repr.pdf)
- Universal Domain Adaptation (2019): [[Paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/You_Universal_Domain_Adaptation_CVPR_2019_paper.pdf) [[Notes]](notes/87_Universal_DA.pdf)
- Transferable Adversarial Training (2019): [[Paper]](http://proceedings.mlr.press/v97/liu19b/liu19b.pdf) [[Notes]](notes/86_TDT.pdf)
- Multi-Adversarial Domain Adaptation (2018): [[Paper]](https://arxiv.org/abs/1809.02176) [[Notes]](notes/92_multi_adversarial_domain_adaptation.pdf)
- Conditional Adversarial Domain Adaptation (2018): [[Paper]](https://arxiv.org/abs/1705.10667) [[Notes]](notes/85_CDAN.pdf)
- Learning Adversarially Fair and Transferable Representations (2018): [[Paper]](https://arxiv.org/abs/1802.06309) [[Notes]](notes/88_learning_adv_fair_and_tsf_repres.pdf)
- What is the Effect of Importance Weighting in Deep Learning? (2018): [[Paper]](https://arxiv.org/abs/1812.03372) [[Notes]](notes/89_effect_of_importance_weighting.pdf)

## Explainability

- Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU Models (2021): [[Paper]](https://arxiv.org/abs/2103.06922) [[Notes]](notes/Towards_Interpreting_and_Mitigating_Shortcut_Learning_Behavior_of_NLU_Models.pdf)
- Transformer Interpretability Beyond Attention Visualization (2020): [[Paper]](https://arxiv.org/abs/2012.09838) [[Notes]](notes/Transformer_Interpretability_Beyond_Attention_Visualization.pdf)
- What shapes feature representations? Exploring datasets, architectures, and training (2020): [[Paper]](https://arxiv.org/abs/2006.12433) [[Notes]](notes/What_shapes_feature_representations_Exploring_datasets_architectures_and_training.pdf)
- Attention-based Dropout Layer for Weakly Supervised Object Localization (2019): [[Paper]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Choe_Attention-Based_Dropout_Layer_for_Weakly_Supervised_Object_Localization_CVPR_2019_paper.pdf) [[Notes]](notes/58_attention_based_dropout.pdf)
- Attention is not Explanation (2019): [[Paper]](https://arxiv.org/abs/1902.10186) [[Notes]](notes/Attention_is_not_Explanation.pdf)
- SmoothGrad: removing noise by adding noise (2017): [[Paper]](https://arxiv.org/abs/1706.03825) [[Notes]](notes/SmoothGrad_removing_noise_by_adding_noise.pdf)
- Axiomatic Attribution for Deep Networks (2017): [[Paper]](https://arxiv.org/abs/1703.01365) [[Notes]](notes/Axiomatic_Attribution_for_Deep_Networks.pdf)
- Attention Branch Network: Learning of Attention Mechanism for Visual Explanation (2019): [[Paper]](https://arxiv.org/abs/1812.10025) [[Notes]](notes/57_attention_branch_netwrok.pdf)
- Paying More Attention to Attention: Improving the Performance of CNNs via Attention Transfer (2016): [[Paper]](https://arxiv.org/abs/1612.03928) [[Notes]](notes/71_attention_transfer.pdf)

## Natural Language Processing (NLP)

- Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (2021): [[Paper]](https://arxiv.org/abs/2107.13586) [[Notes]](notes/Pre-train,_Prompt,_and_Predict:_A_Systematic_Survey_of_Prompting_Methods_in_Natural_Language_Processing.pdf)
- Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data (2020): [[Paper]](https://arxiv.org/abs/2010.11966) [[Notes]](notes/102_uda_with_naive_aug.pdf)
- Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning (2021): [[Paper]](https://arxiv.org/abs/2011.01403) [[Notes]](notes/104_supervised_const_for_fine_tuning.pdf)
- BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning (2020): [[Paper]](https://arxiv.org/abs/1902.02671) [[Notes]](notes/99_bert_and_pals.pdf)
- FreeLB: Enhanced Adversarial Training for Natural Language Understanding (2020): [[Paper]](https://arxiv.org/abs/1909.11764) [[Notes]](notes/101_freeLB.pdf)
- MixText: Linguistically-Informed Interpolation for Semi-Supervised Text Classification (2020): [[Paper]](https://arxiv.org/abs/2004.12239) [[Notes]](notes/100_mixtext.pdf)

## Generative Modeling

- Generative Pretraining from Pixels (2020): [[Paper]](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) [[Notes]](notes/96_generative_pretraining_from_pixels.pdf)
- Consistency Regularization for Generative Adversarial Networks (2020): [[Paper]](https://arxiv.org/abs/1910.12027) [[Notes]](notes/84_CR_GANs.pdf)

## Unsupervised Learning
- Invariant Information Clustering for Unsupervised Image Classification and Segmentation (2019): [[Paper]](https://arxiv.org/abs/1807.06653) [[Notes]](notes/78_IIC.pdf)
- Deep Clustering for Unsupervised Learning of Visual Features (2018): [[Paper]](https://arxiv.org/abs/1807.05520) [[Notes]](notes/70_deep_clustering_for_un_visual_features.pdf)

## Semantic Segmentation
- DeepLabv3+: Encoder-Decoder with Atrous Separable Convolution (2018): [[Paper]](https://arxiv.org/abs/1802.02611) [[Notes]](notes/26_deeplabv3+.pdf)
- Large Kernel Matters: Improve Semantic Segmentation by Global Convolutional Network (2017): [[Paper]](https://arxiv.org/abs/1703.02719) [[Notes]](notes/28_large_kernel_maters.pdf)
- Understanding Convolution for Semantic Segmentation (2018): [[Paper]](https://arxiv.org/abs/1702.08502) [[Notes]](notes/29_understanding_conv_for_sem_seg.pdf)
- Rethinking Atrous Convolution for Semantic Image Segmentation (2017): [[Paper]](https://arxiv.org/abs/1706.05587) [[Notes]](notes/25_deeplab_v3.pdf)
- RefineNet: Multi-path refinement networks for high-resolution semantic segmentation (2017): [[Paper]](https://arxiv.org/abs/1611.06612) [[Notes]](notes/31_refinenet.pdf)
- Pyramid Scene Parsing Network (2017): [[Paper]](http://jiaya.me/papers/PSPNet_cvpr17.pdf) [[Notes]](notes/22_pspnet.pdf)
- SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation (2016): [[Paper]](https://arxiv.org/pdf/1511.00561) [[Notes]](notes/21_segnet.pdf)
- ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation (2016): [[Paper]](https://arxiv.org/abs/1606.02147) [[Notes]](notes/27_enet.pdf)
- Attention to Scale: Scale-aware Semantic Image Segmentation (2016): [[Paper]](https://arxiv.org/abs/1511.03339) [[Notes]](notes/30_atttention_to_scale.pdf)
- Deeplab: semantic image segmentation with DCNN, atrous convs and CRFs (2016): [[Paper]](https://arxiv.org/abs/1606.00915) [[Notes]](notes/23_deeplab_v2.pdf)
- U-Net: Convolutional Networks for Biomedical Image Segmentation (2015): [[Paper]](https://arxiv.org/abs/1505.04597) [[Notes]](notes/20_Unet.pdf)
- Fully Convolutional Networks for Semantic Segmentation (2015): [[Paper]](https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf) [[Notes]](notes/19_FCN.pdf)
- Hypercolumns for object segmentation and fine-grained localization (2015): [[Paper]](http://home.bharathh.info/pubs/pdfs/BharathCVPR2015.pdf) [[Notes]](notes/24_hypercolumns.pdf)

## Weakly- and Semi-supervised Semantic segmentation
- Box-driven Class-wise Region Masking and Filling Rate Guided Loss (2019): [[Paper]](http://arxiv.org/abs/1904.11693) [[Notes]](notes/54_boxe_driven_weakly_segmentation.pdf)
- FickleNet: Weakly and Semi-supervised Semantic Segmentation using Stochastic Inference (2019): [[Paper]](https://arxiv.org/abs/1902.10421) [[Notes]](notes/49_ficklenet.pdf)
- Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing (2018): [[Paper]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Huang_Weakly-Supervised_Semantic_Segmentation_CVPR_2018_paper.pdf) [[Notes]](notes/53_deep_seeded_region_growing.pdf)
- Learning Pixel-level Semantic Affinity with Image-level Supervision (2018): [[Paper]](https://arxiv.org/abs/1803.10464) [[Notes]](notes/81_affinity_for_ws_segmentation.pdf)
- Object Region Mining with Adversarial Erasing (2018): [[Paper]](https://arxiv.org/abs/1703.08448) [[Notes]](notes/51_object_region_manning_for_sem_seg.pdf)
- Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi-Supervised Segmentation (2018): [[Paper]](https://arxiv.org/abs/1805.04574) [[Notes]](notes/52_dilates_convolution_semi_super_segmentation.pdf)
- Tell Me Where to Look: Guided Attention Inference Network (2018): [[Paper]](https://arxiv.org/abs/1802.10171) [[Notes]](notes/50_tell_me_where_to_look.pdf)
- Semi Supervised Semantic Segmentation Using Generative Adversarial Network (2017): [[Paper]](https://arxiv.org/abs/1703.09695) [[Notes]](notes/82_ss_segmentation_gans.pdf)
- Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation (2015): [[Paper]](https://arxiv.org/abs/1506.04924) [[Notes]](notes/47_decoupled_nn_for_segmentation.pdf)
- Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation (2015): [[Paper]](https://arxiv.org/abs/1502.02734) [[Notes]](notes/48_weakly_and_ss_for_segmentation.pdf)
## Information Retrieval
- VSE++: Improving Visual-Semantic Embeddings with Hard Negatives (2018): [[Paper]](https://arxiv.org/abs/1707.05612) [[Notes]](notes/77_vse++.pdf)

## Graph Neural Network
- Pixels to Graphs by Associative Embedding (2017): [[Paper]](https://arxiv.org/abs/1706.07365) [[Notes]](notes/36_pixels_to_graphs.pdf)
- Associative Embedding: End-to-End Learning for Joint Detection and Grouping (2017): [[Paper]](https://arxiv.org/abs/1611.05424) [[Notes]](notes/35_associative_emb.pdf)
- Interaction Networks for Learning about Objects, Relations and Physics (2016): [[Paper]](https://arxiv.org/abs/1612.00222) [[Notes]](notes/18_interaction_nets.pdf)
- DeepWalk: Online Learning of Social Representations (2014): [[Paper]](http://www.perozzi.net/publications/14_kdd_deepwalk.pdf) [[Notes]](notes/deep_walk.pdf)
- The graph neural network model (2009): [[Paper]](https://persagen.com/files/misc/scarselli2009graph.pdf) [[Notes]](notes/graph_neural_nets.pdf)

## Regularization
- Manifold Mixup: Better Representations by Interpolating Hidden States (2018): [[Paper]](https://arxiv.org/abs/1806.05236) [[Notes]](notes/43_manifold_mixup.pdf)

## Deep learning Methods & Models
- AutoAugment (2018): [[Paper]](https://arxiv.org/abs/1805.09501) [[Notes]](notes/41_autoaugment.pdf)
- Stacked Hourglass (2017): [[Paper]](http://ismir2018.ircam.fr/doc/pdfs/138_Paper.pdf) [[Notes]](notes/34_stacked_hourglass.pdf)

## Document analysis and segmentation
- dhSegment: A generic deep-learning approach for document segmentation (2018): [[Paper]](https://arxiv.org/abs/1804.10371) [[Notes]](notes/dhSegement.pdf)
- Learning to extract semantic structure from documents using multimodal fully convolutional neural networks (2017): [[Paper]](https://arxiv.org/abs/1706.02337) [[Notes]](notes/learning_to_extract.pdf)
- Page Segmentation for Historical Handwritten Document Images Using Conditional Random Fields (2016): [[Paper]](https://www.researchgate.net/publication/312486501_Page_Segmentation_for_Historical_Handwritten_Document_Images_Using_Conditional_Random_Fields) [[Notes]](notes/seg_with_CRFs.pdf)
- ICDAR 2015 competition on text line detection in historical documents (2015): [[Paper]](http://ieeexplore.ieee.org/abstract/document/7333945/) [[Notes]](notes/ICDAR2015.pdf)
- Handwritten text line segmentation using Fully Convolutional Network (2017): [[Paper]](https://ieeexplore.ieee.org/document/8270267/) [[Notes]](notes/handwritten_text_seg_FCN.pdf)
- Deep Neural Networks for Large Vocabulary Handwritten Text Recognition (2015): [[Paper]](https://tel.archives-ouvertes.fr/tel-01249405/document) [[Notes]](notes/andwriten_text_recognition.pdf)
- Page Segmentation of Historical Document Images with Convolutional Autoencoders (2015): [[Paper]](https://ieeexplore.ieee.org/abstract/document/7333914/) [[Notes]](notes/segmentation_with_CAE.pdf)
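Most of the `[Paper]` links above point to arXiv abstract pages. When you want the PDFs on disk for side-by-side reading with the notes, the download is easy to script. The sketch below uses only the Python standard library and assumes the usual arXiv URL scheme, where an abstract at `/abs/<id>` has its PDF at `/pdf/<id>`; the helper name and the `papers/` output directory are illustrative choices, not part of this repository.

```python
from pathlib import Path
from urllib.request import urlopen


def fetch_arxiv_pdf(abs_url: str, out_dir: str = "papers") -> Path:
    """Download the PDF behind an arXiv abstract URL into out_dir.

    Example: https://arxiv.org/abs/1905.02249 -> papers/1905.02249.pdf
    """
    paper_id = abs_url.rstrip("/").rsplit("/", 1)[-1]  # e.g. "1905.02249"
    pdf_url = f"https://arxiv.org/pdf/{paper_id}"      # arXiv serves the PDF here
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    target = out / f"{paper_id}.pdf"
    with urlopen(pdf_url) as resp:
        target.write_bytes(resp.read())
    return target


if __name__ == "__main__":
    # The MixMatch entry from the Semi-Supervised Learning section above.
    print(fetch_arxiv_pdf("https://arxiv.org/abs/1905.02249"))
```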
# ML-paper-notes quick-start guide

`ML-paper-notes` is not a software library that needs to be built and run; it is a **collection of machine learning paper notes**. The repository curates a large number of classic papers on self-supervised learning, semi-supervised learning, video understanding, and related areas, and provides the author's condensed PDF notes for each.

This guide is meant to help you fetch and read the material quickly.

## Prerequisites

No particular operating system or dependency stack is required. You only need:
*   **Operating system**: Windows, macOS, or Linux.
*   **Tools**:
    *   `git`: to clone the repository.
    *   A PDF reader: Adobe Acrobat, the Chrome browser, or `evince`/`okular` on Linux, for viewing the note files.
*   **Network**: most paper links point to arXiv or the CVPR open-access site, which can be slow to reach from mainland China; if that affects you, consider a proxy or a domestic arXiv mirror when downloading papers.

## Installation

"Installing" simply means cloning the repository with Git.

### 1. Clone the repository
Open a terminal (Terminal or CMD) and run:

```bash
git clone https://github.com/yassouali/ML-paper-notes.git
```

> **Tip**: if the GitHub connection is slow, you can try a domestic mirror:
> ```bash
> git clone https://gitee.com/mirrors/ml-paper-notes.git
> ```
> *(Note: if the Gitee mirror has not synced the latest content, fall back to the official GitHub address together with an acceleration tool.)*

### 2. Enter the directory
```bash
cd ML-paper-notes
```

All of the notes are now on disk under the `notes/` folder.

## Basic usage

The core usage is simply **reading the PDF files under `notes/`**. Each PDF is the distilled summary of one paper.

### 1. Browse the notes
Open the `notes` folder in your file manager, or list the available note files from the terminal:

```bash
ls notes/
```

You will see file names like the following (they map to the categories in the README):
*   `95_big_self-supervised_models.pdf` (self-supervised learning)
*   `45_mixmatch.pdf` (semi-supervised learning)
*   `SlowFast_Networks_for_Video_Recognition.pdf` (video understanding)
*   `85_CDAN.pdf` (domain adaptation)

### 2. Read a specific note
Open the file that matches your research interest with any PDF reader.

**Linux/macOS command-line examples:**
```bash
# Open the MixMatch notes with the default application (macOS)
open notes/45_mixmatch.pdf

# Or use evince (Linux)
evince notes/45_mixmatch.pdf
```

**Windows command-line example:**
```cmd
start notes\45_mixmatch.pdf
```

### 3. Read alongside the original paper
Every note file has a corresponding paper link in the README. A suggested workflow:
1.  Read `notes/xxx.pdf` first to grasp the paper's core idea, model architecture, and experimental conclusions.
2.  Follow the matching `[[Paper]]` link in the README to download or read the full paper for the finer details (a scripted version of this lookup is sketched right after the quick-reference list below).

---
**Quick reference to the main areas covered:**
*   **Self-Supervised & Contrastive Learning** (including forerunners of SimCLR and MoCo)
*   **Semi-Supervised Learning** (e.g., MixMatch, Mean Teacher)
*   **Video Understanding** and temporal action recognition (e.g., SlowFast, TSM, video Transformers)
*   **Domain Adaptation** and generalization
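The note-to-paper pairing described in step 3 can also be automated. The following sketch is not part of the repository: it scans the cloned `README.md` for list entries in the `[[Paper]](...) [[Notes]](...)` format used throughout the index and prints every pair whose title contains a keyword. The file name `find_note.py` and the regex are my own illustrative choices.

```python
import re
import sys

# Matches README entries of the form:
# "- Title (Year): [[Paper]](url) [[Notes]](notes/xxx.pdf)"
ENTRY = re.compile(
    r"^- (?P<title>.+?):?\s*\[\[Paper\]\]\((?P<paper>[^)]+)\)\s*"
    r"\[\[Notes\]\]\((?P<notes>[^)]+)\)",
    re.MULTILINE,
)


def find_entries(readme_text: str, keyword: str):
    """Yield (title, paper_url, notes_path) for entries matching the keyword."""
    for m in ENTRY.finditer(readme_text):
        if keyword.lower() in m.group("title").lower():
            yield m.group("title"), m.group("paper"), m.group("notes")


if __name__ == "__main__":
    keyword = sys.argv[1] if len(sys.argv) > 1 else "mixmatch"
    with open("README.md", encoding="utf-8") as f:
        text = f.read()
    for title, paper, notes in find_entries(text, keyword):
        print(f"{title}\n  paper: {paper}\n  notes: {notes}\n")
```

Run it from the repository root, for example `python find_note.py mixmatch`, then open the printed notes path with your PDF reader as in step 2.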
# Typical use case

An algorithm engineer on a computer-vision team is surveying recent self-supervised learning methods for a medical-imaging project, where labeled data is scarce.

### Without ML-paper-notes
- **Slow retrieval**: searching arXiv by hand through a flood of papers, with no quick way to isolate work on a specific technique such as rotation feature decoupling or debiased contrastive learning.
- **High reading cost**: dense mathematics and long experimental sections mean days of close reading per paper before the reusable core idea emerges, which drags down the project schedule.
- **Fragmented knowledge**: notes end up scattered across personal documents and sticky notes, with no topic-level organization (semi-supervised learning, contrastive learning, and so on), making it hard to compare methods side by side.
- **Unclear direction**: without a summary of prior work's limitations, it is easy to burn scarce compute on approaches already shown to underperform.

### With ML-paper-notes
- **Targeted lookup**: the table of contents leads straight to the curated notes for closely matching papers such as "Self-Supervised Representation Learning by Rotation Feature Decoupling", pinning down the key technique within minutes.
- **Fast comprehension**: the structured PDF summaries convey each model's architectural novelty and experimental conclusions, cutting per-paper reading time from days to roughly half an hour.
- **A systematic view**: the topic-organized library traces the evolution of self-supervised learning from jigsaw prediction to contrastive learning, which helps when drafting a technical roadmap.
- **Fewer dead ends**: the notes' remarks on where each method applies and where it falls short steer effort toward the most promising directions.

By turning dense academic papers into structured notes, ML-paper-notes shortens the path from literature survey to engineering practice.

---

**Author**: Yassine ([yassouali](https://github.com/yassouali)), CS PhD student in ML. Website: [yassouali.github.io](https://yassouali.github.io). Twitter: @yass_ouali.

**Topics**: machine-learning, computer-vision, natural-language-processing, summary, deep-learning, nlp

**Environment note**: the repository is purely a collection of notes and summaries in PDF form. It contains no executable source code, training scripts, or inference programs, so there are no operating-system, GPU, memory, Python, or dependency requirements; a PDF reader is all you need.