[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-stratosphereips--awesome-ml-privacy-attacks":3,"tool-stratosphereips--awesome-ml-privacy-attacks":61},[4,18,26,36,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",141543,2,"2026-04-06T11:32:54",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107888,"2026-04-06T11:32:50",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":10,"last_commit_at":50,"category_tags":51,"status":17},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch 是一个基于 PyTorch 的开源教育项目，旨在引导用户从零开始一步步构建一个类似 ChatGPT 的大型语言模型（LLM）。它不仅是同名技术著作的官方代码库，更提供了一套完整的实践方案，涵盖模型开发、预训练及微调的全过程。\n\n该项目主要解决了大模型领域“黑盒化”的学习痛点。许多开发者虽能调用现成模型，却难以深入理解其内部架构与训练机制。通过亲手编写每一行核心代码，用户能够透彻掌握 Transformer 架构、注意力机制等关键原理，从而真正理解大模型是如何“思考”的。此外，项目还包含了加载大型预训练权重进行微调的代码，帮助用户将理论知识延伸至实际应用。\n\nLLMs-from-scratch 特别适合希望深入底层原理的 AI 开发者、研究人员以及计算机专业的学生。对于不满足于仅使用 API，而是渴望探究模型构建细节的技术人员而言，这是极佳的学习资源。其独特的技术亮点在于“循序渐进”的教学设计：将复杂的系统工程拆解为清晰的步骤，配合详细的图表与示例，让构建一个虽小但功能完备的大模型变得触手可及。无论你是想夯实理论基础，还是为未来研发更大规模的模型做准备",90106,"2026-04-06T11:19:32",[35,15,13,14],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":10,"last_commit_at":58,"category_tags":59,"status":17},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[14,15,13,60],"视频",{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":76,"owner_twitter":77,"owner_website":78,"owner_url":79,"languages":76,"stars":80,"forks":81,"last_commit_at":82,"license":76,"difficulty_score":83,"env_os":84,"env_gpu":85,"env_ram":85,"env_deps":86,"category_tags":89,"github_topics":90,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":95,"updated_at":96,"faqs":97,"releases":98},4570,"stratosphereips\u002Fawesome-ml-privacy-attacks","awesome-ml-privacy-attacks","An awesome list of papers on privacy attacks against machine learning","awesome-ml-privacy-attacks 是一个专注于机器学习隐私攻击领域的精选资源库。它系统性地整理了大量关于如何攻击机器学习模型隐私的学术论文，并尽可能提供了对应的代码实现。\n\n在人工智能快速发展的今天，模型往往面临成员推断、数据重建、属性推断及模型提取等隐私泄露风险。awesome-ml-privacy-attacks 旨在解决研究人员和开发者难以全面追踪该领域最新进展、复现攻击实验或寻找评估工具的痛点。它不仅收录了从基础理论到前沿技术的百余篇核心论文，还分类汇总了如 PrivacyRaven、TensorFlow Privacy 等实用的隐私测试工具，帮助用户快速构建攻防知识体系。\n\n这份资源特别适合从事 AI 安全研究的研究人员、需要评估模型隐私风险的算法工程师，以及对数据隐私保护感兴趣的技术爱好者。通过参考其中的综述文章和代码示例，用户可以深入理解隐私攻击的原理，从而设计出更健壮的防御机制。awesome-ml-privacy-attacks 以开源协作的方式持续更新，是探索机器学习隐私与安全不可或缺的知识地图。","# Awesome Attacks on Machine Learning Privacy   [![Awesome](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png\u002Fbadge.svg)](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png)\nThis repository contains a curated list of papers related to privacy attacks against machine learning. A code repository is provided when available by the authors. For corrections, suggestions, or missing papers, please either open an issue or submit a pull request.\n\n# Contents\n- [Awesome Attacks on Machine Learning Privacy   ![Awesome](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png)](#awesome-attacks-on-machine-learning-privacy-img-srchttpsawesomerebadgesvg-altawesome)\n- [Contents](#contents)\n- [Surveys and Overviews](#surveys-and-overviews)\n- [Privacy Testing Tools](#privacy-testing-tools)\n- [Papers and Code](#papers-and-code)\n  - [Membership inference](#membership-inference)\n  - [Reconstruction](#reconstruction)\n  - [Property inference\u002FDistribution inference](#property-inference)\n  - [Model extraction](#model-extraction)\n- [Other](#other)\n\n# Surveys and Overviews\n- [**SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10221914) (Sayanton Dibbo, 2023)\n- [**A Survey of Privacy Attacks in Machine Learning**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3624010) (Rigaki and Garcia, 2023)\n- [**An Overview of Privacy in Machine Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.08679) (De Cristofaro, 2020)\n- [**Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.11601) (Fan et al., 2020)\n- [**Privacy and Security Issues in Deep Learning: A Survey**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9294026) (Liu et al., 2021)\n- [**ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.02551) (Liu et al., 2021)\n- [**Membership Inference Attacks on Machine Learning: A Survey**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07853) (Hu et al., 2021)\n- [**Survey: Leakage and Privacy at Inference Time**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01614) (Jegorova et al., 2021)\n- [**A Review of Confidentiality Threats Against Embedded Neural Network Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.01401) (Joud et al., 2021)\n- [**Federated Learning Attacks Revisited: A Critical Discussion of Gaps,Assumptions, and Evaluation Setups**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03363) (Wainakh et al., 2021)\n- [**I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08451) (Oliynyk et al., 2022)\n\n# Privacy Testing Tools\n- [**PrivacyRaven**](https:\u002F\u002Fgithub.com\u002Ftrailofbits\u002FPrivacyRaven) (Trail of Bits)\n- [**TensorFlow Privacy**](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprivacy\u002Ftree\u002Fmaster\u002Ftensorflow_privacy\u002Fprivacy\u002Fmembership_inference_attack) (TensorFlow)\n- [**Machine Learning Privacy Meter**](https:\u002F\u002Fgithub.com\u002Fprivacytrustlab\u002Fml_privacy_meter) (NUS Data Privacy and Trustworthy Machine Learning Lab)\n- [**CypherCat (archive-only)**](https:\u002F\u002Fgithub.com\u002FLab41\u002Fcyphercat) (IQT Labs\u002FLab 41)\n- [**Adversarial Robustness Toolbox (ART)**](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002Fadversarial-robustness-toolbox) (IBM)\n \n# Papers and Code\n\n## Membership inference\nA curated list of membership inference papers (more than 100 papers) on machine learning models is available at [this repository](https:\u002F\u002Fgithub.com\u002FHongshengHu\u002Fmembership-inference-machine-learning-literature).\n- [**Membership inference attacks against machine learning models**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=7958568) (Shokri et al., 2017) ([code](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fmembership-inference))\n- [**Understanding membership inferences on well-generalized learning models**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.04889)(Long et al., 2018)\n- [**Privacy risk in machine learning: Analyzing the connection to overfitting**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8429311), (Yeom et al., 2018) ([code](https:\u002F\u002Fgithub.com\u002Fsamuel-yeom\u002Fml-privacy-csf18))\n- [**Membership inference attack against differentially private deep learning model**](http:\u002F\u002Fwww.tdp.cat\u002Fissues16\u002Ftdp.a289a17.pdf) (Rahman et al., 2018)\n- [**Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning.**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8835245) (Nasr et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Fprivacytrustlab\u002Fml_privacy_meter))\n- [**Logan: Membership inference attacks against generative models.**](https:\u002F\u002Fcontent.sciendo.com\u002Fdownloadpdf\u002Fjournals\u002Fpopets\u002F2019\u002F1\u002Farticle-p133.xml) (Hayes et al. 2019) ([code](https:\u002F\u002Fgithub.com\u002Fjhayes14\u002Fgen_mem_inf))\n- [**Evaluating differentially private machine learning in practice**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-jayaraman.pdf) (Jayaraman and Evans, 2019) ([code](https:\u002F\u002Fgithub.com\u002Fbargavj\u002FEvaluatingDPML)) \n- [**Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models**](https:\u002F\u002Fwww.ndss-symposium.org\u002Fwp-content\u002Fuploads\u002F2019\u002F02\u002Fndss2019_03A-1_Salem_paper.pdf) (Salem et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002FAhmedSalem2\u002FML-Leaks))\n- [**Privacy risks of securing machine learning models against adversarial examples**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3319535.3354211) (Song L. et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Finspire-group\u002Fprivacy-vs-robustness))\n- [**White-box vs Black-box: Bayes Optimal Strategies for Membership Inference**](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fsablayrolles19a.html) (Sablayrolles et al., 2019)\n- [**Privacy risks of explaining machine learning models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.00164) (Shokri et al., 2019)\n- [**Demystifying membership inference attacks in machine learning as a service**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8634878) (Truex et al., 2019)\n- [**Monte carlo and reconstruction membership inference attacks against generative models**](https:\u002F\u002Fcontent.sciendo.com\u002Fview\u002Fjournals\u002Fpopets\u002F2019\u002F4\u002Farticle-p232.xml) (Hilprecht et al., 2019)\n- [**MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.10594) (Jia et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Fjjy1994\u002FMemGuard))\n- [**Gan-leaks: A taxonomy of membership inference attacks against gans**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.03935.pdf) (Chen,et al., 2019))\n- [**Auditing Data Provenance in Text-Generation Models**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3292500.3330885) (Song and Shmatikov, 2019)\n- [**Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?**](https:\u002F\u002Fwww.mitpressjournals.org\u002Fdoi\u002Ffull\u002F10.1162\u002Ftacl_a_00299) (Hisamoto et al., 2020) \n- [**Revisiting Membership Inference Under Realistic Assumptions**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.10881.pdf) (Jayaraman et al., 2020)\n- [**When Machine Unlearning Jeopardizes Privacy**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.02205.pdf) (Chen et al., 2020)\n- [**Modelling and Quantifying Membership Information Leakage in Machine Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.10648.pdf) (Farokhi and Kaafar, 2020)\n- [**Systematic Evaluation of Privacy Risks of Machine Learning Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.10595) (Song and Mittal, 2020) ([code](https:\u002F\u002Fgithub.com\u002Finspire-group\u002Fmembership-inference-evaluation))\n- [**Towards the Infeasibility of Membership Inference on Deep Models**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.13702.pdf) (Rezaei and Liu, 2020) ([code](https:\u002F\u002Fgithub.com\u002Fshrezaei\u002FMI-Attack))\n- [**Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.11798) (Leino and Fredrikson, 2020) \n- [**Label-Only Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.14321) (Choquette Choo et al., 2020) \n- [**Label-Leaks: Membership Inference Attack with Label**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15528) (Li and Zhang, 2020) \n- [**Alleviating Privacy Attacks via Causal Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.12732) (Tople et al., 2020)\n- [**On the Effectiveness of Regularization Against Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05336) (Kaya et al., 2020)\n- [**Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00395) (Rahimian et al., 2020)\n- [**Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.09685) (He et al., 2019)\n- [**Differential Privacy Defenses and Sampling Attacks for Membership Inference**](https:\u002F\u002Fpriml-workshop.github.io\u002Fpriml2019\u002Fpapers\u002FPriML2019_paper_47.pdf) (Rahimian et al., 2019)\n- [**privGAN: Protecting GANs from membership inference attacks at low cost**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.00071) (Mukherjee et al., 2020)\n- [**Sharing Models or Coresets: A Study based on Membership Inference Attack**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02977) (Lu et al., 2020)\n- [**Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.04872) (Zou et al., 2020)\n- [**Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05669) (Bentley et al., 2020)\n- [**MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05683) (Liu et al., 2020)\n- [**On Primes, Log-Loss Scores and (No) Privacy**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08559) (Aggarwal et al., 2020)\n- [**MCMIA: Model Compression Against Membership Inference Attack in Deep Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.13578) (Wang et al., 2020)\n- [**Bootstrap Aggregation for Point-based Generalized Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08738) (Felps et al., 2020)\n- [**Differentially Private Learning Does Not Bound Membership Inference**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12112) (Humphries et al., 2020)\n- [**Quantifying Membership Privacy via Information Leakage**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05965) (Saeidian et al., 2020)\n- [**Disparate Vulnerability: on the Unfairness of Privacy Attacks Against Machine Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.00389) (Yaghini et al., 2020)\n- [**Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.13696) (Xue et al., 2020)\n- [**Towards Realistic Membership Inferences: The Case of Survey Data**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3427228.3427282?casa_token=eHK7DPiTIigAAAAA:sinfqtYoQA8GddIiwn28DYNEG1NsvW42wvUnRLkpBGQKhrI_mawTRV8MOmLGotqaTspYS-eOIp56UQ)\n- [**Unexpected Information Leakage of Differential Privacy Due to Linear Property of Queries**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08958) (Huang et al., 2020)\n- [**TransMIA: Membership Inference Attacks Using Transfer Shadow Training**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14661) (Hidano et al., 2020)\n- [**An Extension of Fano's Inequality for Characterizing Model Susceptibility to Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08097) (Jha et al., 2020)\n- [**Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04535) (Nasr et al., 2021)\n- [**Membership Inference Attack with Multi-Grade Service Models in Edge Intelligence**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9355044?casa_token=jQafisVyDTsAAAAA:hjGRYTW8BWFnmvVnXdsrv42HQoBJ0EdzBhJf34nAf2BWlydneTb6Ll8GVnVdGfI4M4S53r3hMw) (Wang et al., 2021)\n- [**Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07762) (Shafran et al., 2021)\n- [**Membership Inference Attacks on Deep Regression Models for Neuroimaging**](https:\u002F\u002Fopenreview.net\u002Fforum?id=8lL_y9n-CV) (Gupta et al., 2021)\n- [**Node-Level Membership Inference Attacks Against Graph Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05429) (He et al., 2021)\n- [**Practical Blind Membership Inference Attack via Differential Comparisons**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.01341) (Hui et al., 2021)\n- [**ADePT: Auto-encoder based Differentially Private Text Transformation**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.01502) (Krishna et al., 2021)\n- [**Source Inference Attacks in Federated Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05659) (Hu et al., 2021) ([code](https:\u002F\u002Fgithub.com\u002FHongshengHu\u002Fsource-inference-FL))\n- [**The Influence of Dropout on Membership Inference in Differentially Private Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.09008) (Galinkin, 2021)\n- [**Membership Inference Attack Susceptibility of Clinical Language Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08305) (Jagannatha et al., 2021)\n- [**Membership Inference Attacks on Knowledge Graphs**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08273) (Wang & Sun, 2021)\n- [**When Does Data Augmentation Help With Membership Inference Attacks?**](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fkaya21a.html) (Kaya and Dumitras, 2021)\n- [**The Influence of Training Parameters and Architectural Choices on the Vulnerability of Neural Networks to Membership Inference Attacks**](https:\u002F\u002Fwww.mi.fu-berlin.de\u002Finf\u002Fgroups\u002Fag-idm\u002Ftheseses\u002F2021_oussama_bouanani_bsc_thesis.pdf) (Bouanani, 2021)\n- [**Membership Inference on Word Embedding and Beyond**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11384) (Mahloujifar et al., 2021)\n- [**TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.13190) (Hu et al., 2021)\n- [**Enhanced Membership Inference Attacks against Machine Learning Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09679) (Ye et al., 2021)\n- [**Do Not Trust Prediction Scores for Membership Inference Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09076) (Hintersdorf et al., 2021)\n- [**Membership Inference via Backdooring**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04823) (Hu et al. 2022)\n\n\n\n## Reconstruction\nReconstruction attacks cover also attacks known as *model inversion* and *attribute inference*.\n- [**Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity14\u002Fsec14-paper-fredrikson-privacy.pdf) (Fredrikson et al., 2014)\n- [**Model inversion attacks that exploit confidence information and basic countermeasures**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F2810103.2813677) (Fredrikson et al., 2015) ([code](https:\u002F\u002Fgithub.com\u002Fyashkant\u002FModel-Inversion-Attack))\n- [**A methodology for formalizing model-inversion attacks**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=7536387) (Wu et al., 2016)\n- [**Deep models under the gan: Information leakage from collaborative deep learning**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3133956.3134012) (Hitaj et al., 2017) \n- [**Machine learning models that remember too much**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3133956.3134077) (Song, C. et al., 2017) ([code](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fml-model-remember))\n- [**Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8476191\u002F8476869\u002F08476925.pdf?casa_token=VQ_s2jcJFp8AAAAA:Hg-wdpPcESm9UUsZHxCLzIvYqVEqW11_OCXEyjARxW5K2cFYi6EFNXlH8IKKjNSgv6oQoQJlsw) (Hidano et al., 2017)\n- [**The secret sharer: Evaluating and testing unintended memorization in neural networks**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-carlini.pdf) (Carlini et al., 2019)\n- [**Deep leakage from gradients**](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9617-deep-leakage-from-gradients.pdf) (Zhu et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fdlg))\n- [**Model inversion attacks against collaborative inference**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3359789.3359824) (He et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Fzechenghe\u002FInverse_Collaborative_Inference))\n- [**Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8737416) (Wang et al., 2019)\n- [**Neural network inversion in adversarial setting via background knowledge alignment**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3319535.3354261?casa_token=lDNQ40-4Wa4AAAAA:p9olQ3qMdDZ0n2sl-nNIgk4sOuLRMBTGVTxycZ5wjGpnFPf5lTz-MYw0e8ISggSseHC9T46it5yX) (Yang et al., 2019)\n- [**iDLG: Improved Deep Leakage from Gradients**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.02610) (Zhao et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002FPatrickZH\u002FImproved-Deep-Leakage-from-Gradients))\n- [**Privacy Risks of General-Purpose Language Models**](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FXudong_Pan3\u002Fpublication\u002F340965355_Privacy_Risks_of_General-Purpose_Language_Models\u002Flinks\u002F5ea7ca55a6fdccd7945b6a7d\u002FPrivacy-Risks-of-General-Purpose-Language-Models.pdf) (Pan et al., 2020)\n- [**The secret revealer: generative model-inversion attacks against deep neural networks**](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf)) (Zhang et al., 2020)\n- [**Inverting Gradients - How easy is it to break privacy in federated learning?**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.14053) (Geiping et al., 2020)\n- [**GAMIN: An Adversarial Approach to Black-Box Model Inversion**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11835) (Aivodji et al., 2019)\n- [**Trade-offs and Guarantees of Adversarial Representation Learning for Information Obfuscation**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.07902) (Zhao et al., 2020) \n- [**Reconstruction of training samples from loss functions**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07337.pdf) (Sannai, 2018)\n- [**A Framework for Evaluating Gradient Leakage Attacks in Federated Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10397.pdf) (Wei et al., 2020) \n- [**Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.07464.pdf) (Hitaj et al., 2017)\n- [**Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.00535.pdf) (Wang et al., 2018) \n- [**Exploring Image Reconstruction Attack in Deep Learning Computation Offloading**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3325413.3329791) (Oh and Lee, 2019) \n- [**I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.05847.pdf) (Wei et al., 2019)\n- [**Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.01067) (Salem et al., 2019)\n- [**Illuminating the Dark or how to recover what should not be seen in FE-based classifiers**](https:\u002F\u002Feprint.iacr.org\u002F2018\u002F1001) (Carpov et al., 2020)\n- [**Evaluation Indicator for Model Inversion Attack**](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d\u002Fview) (Tanaka et al., 2020)\n- [**Understanding Unintended Memorization in Federated Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07490) (Thakkar et al., 2020)\n- [**An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8822435) (Park et al., 2019)\n- [**Reducing Risk of Model Inversion Using Privacy-Guided Training**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.15877) (Goldsteen et al., 2020)\n- [**Robust Transparency Against Model Inversion Attacks**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9178452) (Alufaisan et al., 2020)\n- [**Does AI Remember? Neural Networks and the Right to be Forgotten**](https:\u002F\u002Fuwspace.uwaterloo.ca\u002Fhandle\u002F10012\u002F15754) (Graves et al., 2020)\n- [**Improving Robustness to Model Inversion Attacks via Mutual Information Regularization**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05241) (Wang et al., 2020)\n- [**SAPAG: A Self-Adaptive Privacy Attack From Gradients**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06228) (Wang et al., 2020)\n- [**Theory-Oriented Deep Leakage from Gradients via Linear Equation Solver**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13356) (Pan et al., 2020)\n- [**Improved Techniques for Model Inversion Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.04092) (Chen et al., 2020)\n- [**Black-box Model Inversion Attribute Inference Attacks on Classification Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.03404) (Mehnaz et al., 2020)\n- [**Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9306253?casa_token=H78uIRJ2smYAAAAA:iQiA_5d2a2mbH4oBF9EZwSjakAz3Muq3ZOkNDBkK_fLq19PEMGEvpipyli7d9SGKESglqIb9Ug) (Khosravy et al., 2020) \n- [**MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11463) (Li et al., 2020)\n- [**Evaluation of Inference Attack Models for Deep Learning on Medical Data**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00177) (Wu et al., 2020)\n- [**FaceLeaks: Inference Attacks against Transfer Learning Models via Black-box Queries**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14023) (Liew and Takahashi, 2020)\n- [**Extracting Training Data from Large Language Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.07805) (Carlini et al., 2020)\n- [**MIDAS: Model Inversion Defenses Using an Approximate Memory System**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9358254) (Xu et al., 2021)\n- [**KART: Privacy Leakage Framework of Language Models Pre-trained with Clinical Records**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.00036) (Nakamura et al., 2020)\n- [**Derivation of Constraints from Machine Learning Models and Applications to Security and Privacy**](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-03091740\u002F) (Falaschi et al., 2021)\n- [**On the (In)Feasibility of Attribute Inference Attacks on Machine Learning Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07101) (Zhao et al., 2021)\n- [**Practical Defences Against Model Inversion Attacks for Split Neural Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05743) (Titcombe et al., 2021)\n- [**R-GAP: Recursive Gradient Attack on Privacy**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07733) (Zhu and Blaschko, 2021)\n- [**Exploiting Explanations for Model Inversion Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12669) (Zhao et al., 2021)\n- [**SAFELearn: Secure Aggregation for private FEderated Learning**](https:\u002F\u002Fencrypto.de\u002Fpapers\u002FFMMMMNRSSYZ21.pdf) (Fereidooni et al., 2021)\n- [**Does BERT Pretrained on Clinical Notes Reveal Sensitive Data?**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.07762) (Lehman et al., 2021)\n- [**Training Data Leakage Analysis in Language Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.05405) (Inan et al., 2021)\n- [**Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9478813?casa_token=047c6zFuwm4AAAAA:h6qWPCm6WXUbtVgk1iATPshiPMfvGEp6lVUrblEm8P2tRX4OIDEDpnzICVwYveoENEnH6Ig-yg) (Masude et al., 2021)\n- [**PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.04725) (Scheliga et al., 2021)\n- [**On the Importance of Encrypting Deep Features**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.07147) (Ni et al., 2021)\n- [**Defending Against Model Inversion Attack by Adversarial Examples**](https:\u002F\u002Fwww.cs.hku.hk\u002Fdata\u002Ftechreps\u002Fdocument\u002FTR-2021-03.pdf) (Wen et al., 2021)\n- [**See through Gradients: Image Batch Recovery via GradInversion**](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYin_See_Through_Gradients_Image_Batch_Recovery_via_GradInversion_CVPR_2021_paper.pdf) (Yin et al., 2021)\n- [**Variational Model Inversion Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10787) (Wang et al., 2021)\n- [**Reconstructing Training Data with Informed Adversaries**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.04845) (Balle et al., 2022)\n- [**Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12179) (Struppek et al., 2022)\n- [**Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06304) (Dong et al., 2022)\n- [**A Linear Reconstruction Approach for Attribute Inference Attacks against Synthetic Data**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.10053) (Annamalai et al., 2023)\n- [**Analysis and Utilization of Hidden Information in Model Inversion Attacks**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10184490) (Zhang et al., 2023) ([code](https:\u002F\u002Fgithub.com\u002Fzhangzp9970\u002FAmplified-MIA))\n- [**Text Embeddings Reveal (Almost) As Much As Text**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06816?ref=upstract.com)(Morris et al., 2023)\n- [**On the Inadequacy of Similarity-based Privacy Metrics: Reconstruction Attacks against \"Truly Anonymous Synthetic Data\"**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05114) (Ganev and De Cristofaro, 2023)\n- [**Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10136179) (Dibbo et al., 2023)\n\n\n## Property inference \u002F Distribution inference\n- [**Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1504\u002FIJSN.2015.071829) (Ateniese et al., 2015)\n- [**Property inference attacks on fully connected neural networks using permutation invariant representations**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3243734.3243834) (Ganju et al., 2018)\n- [**Exploiting unintended feature leakage in collaborative learning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8826229\u002F8835208\u002F08835269.pdf) (Melis et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fproperty-inference-collaborative-ml))\n- [**Overlearning Reveals Sensitive Attributes**](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SJeNz04tDS) (Song C. et al., 2020) ([code](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1hu0PhN3pWXe6LobxiPFeYBm8L-vQX2zJ\u002Fview?usp=sharing))\n- [**Subject Property Inference Attack in Collaborative Learning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9204357) (Xu and Li, 2020)\n- [**Property Inference From Poisoning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11073) (Chase et al., 2021)\n- [**Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.13061) (Parisot et al., 2021)\n- [**Honest-but-Curious Nets: Sensitive Attributes of Private Inputs can be Secretly Coded into the Entropy of Classifiers' Outputs**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.12049) (Malekzadeh et al. 2021) ([code](https:\u002F\u002Fgithub.com\u002Fmmalekzadeh\u002Fhonest-but-curious-nets))\n- [**Property Inference Attacks Against GANs**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07608) (Zhou et al., 2021) ([code](https:\u002F\u002Fgithub.com\u002FZhou-Junhao\u002FPIA_GAN))\n- [**Formalizing and Estimating Distribution Inference Risks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06024) (Suri and Evans, 2022) ([code](https:\u002F\u002Fgithub.com\u002Fiamgroot42\u002FFormEstDistRisks))\n- [**Dissecting Distribution Inference**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10136142) (Suri et al., 2023) ([code](https:\u002F\u002Fgithub.com\u002Fiamgroot42\u002Fdissecting_dist_inf))\n- [**SNAP: Efficient Extraction of Private Properties with Poisoning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10179334) (Chaudhari et al., 2023) ([code](https:\u002F\u002Fgithub.com\u002Fjohnmath\u002Fsnap-sp23))\n\n\n## Model extraction\n- [**Stealing machine learning models via prediction apis**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity16\u002Fsec16_paper_tramer.pdf) (Tramèr et al., 2016) ([code](https:\u002F\u002Fgithub.com\u002Fftramer\u002FSteal-ML)) \n- [**Stealing hyperparameters in machine learning**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8418581\u002F8418583\u002F08418595.pdf) (Wang B. et al., 2018)\n- [**Copycat CNN: Stealing Knowledge by Persuading Confession with Random Non-Labeled Data**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8489592) (Correia-Silva et al., 2018) ([code](https:\u002F\u002Fgithub.com\u002Fjeiks\u002FStealing_DL_Models))\n- [**Towards reverse-engineering black-box neural networks.**](https:\u002F\u002Fopenreview.net\u002Fforum?id=BydjJte0-)(Oh et al., 2018) ([code](https:\u002F\u002Fgithub.com\u002Fcoallaoh\u002FWhitenBlackBox))\n- [**Knockoff nets: Stealing functionality of black-box models**](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FOrekondy_Knockoff_Nets_Stealing_Functionality_of_Black-Box_Models_CVPR_2019_paper.pdf) (Orekondy et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Ftribhuvanesh\u002Fknockoffnets))\n- [**PRADA: protecting against DNN model stealing attacks**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8806737) (Juuti et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002FSSGAalto\u002Fprada-protecting-against-dnn-model-stealing-attacks))\n- [**Model Reconstruction from Model Explanations**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3287560.3287562) (Milli et al., 2019)\n- [**Exploring connections between active learning and model extraction**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec20summer_chandrasekaran_prepub.pdf) (Chandrasekaran et al., 2020)\n- [**High Accuracy and High Fidelity Extraction of Neural Networks**](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fusenixsecurity20\u002Fpresentation\u002Fjagielski) (Jagielski et al., 2020)\n- [**Thieves on Sesame Street! Model Extraction of BERT-based APIs**](https:\u002F\u002Fopenreview.net\u002Fattachment?id=Byl5NREFDr&name=original_pdf) (Krishna et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Flanguage\u002Ftree\u002Fmaster\u002Flanguage\u002Fbert_extraction))\n- [**Cryptanalytic Extraction of Neural Network Models**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.04884.pdf) (Carlini et al., 2020) \n- [**CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples**](https:\u002F\u002Fwww.ndss-symposium.org\u002Fndss-paper\u002Fcloudleak-large-scale-deep-learning-models-stealing-through-adversarial-examples\u002F) (Yu et al., 2020) \n- [**ACTIVETHIEF: Model Extraction Using Active Learning and Unannotated Public Data**](https:\u002F\u002Fwww.aaai.org\u002Fojs\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F5432) (Pal et al., 2020) ([code](https:\u002F\u002Fbitbucket.org\u002Fiiscseal\u002Factivethief\u002Fsrc\u002Fmaster))\n- [**Efficiently Stealing your Machine Learning Models**](https:\u002F\u002Fencrypto.de\u002Fpapers\u002FRST19.pdf) (Reith et al., 2019) \n- [**Extraction of Complex DNN Models: Real Threat or Boogeyman?**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.05429.pdf) (Atli et al., 2020) \n- [**Stealing Neural Networks via Timing Side Channels**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.11720.pdf) (Duddu et al., 2019) \n- [**DeepSniffer: A DNN Model Extraction Framework Based on Learning Architectural Hints**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3373376.3378460) (Hu et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002Fxinghu7788\u002FDeepSniffer))\n- [**CSI NN: Reverse Engineering of Neural Network Architectures Through Electromagnetic Side Channel**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-batina.pdf) (Batina et al., 2019)\n- [**Cache Telepathy: Leveraging Shared Resource Attacks to Learn DNN Architectures**](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fusenixsecurity20\u002Fpresentation\u002Fyan) (Yan et al., 2020)\n- [**How to 0wn NAS in Your Spare Time**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06776) (Hong et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002FSanghyun-Hong\u002FHow-to-0wn-NAS-in-Your-Spare-Time))\n- [**Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.03487) (Hong et al., 2020)\n- [**Reverse-Engineering Deep ReLU Networks**](https:\u002F\u002Fproceedings.icml.cc\u002Fstatic\u002Fpaper_files\u002Ficml\u002F2020\u002F1-Paper.pdf) (Rolnick and Kording, 2020)\n- [**Model Extraction Oriented Data Publishing with k-anonymity**](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58208-1_13) (Fukuoka et al., 2020)\n- [**Hermes Attack: Steal DNN Models with Lossless Inference Accuracy**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12784) (Zhu et al., 2020)\n- [**Model extraction from counterfactual explanations**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.01884) (Aïvodji et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002Faivodji\u002Fmrce))\n- [**MetaSimulator: Simulating Unknown Target Models for Query-Efficient Black-box Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00960) (Chen and Yong, 2020) ([code](https:\u002F\u002Fgithub.com\u002Fmachanic\u002FMetaSimulator))\n- [**Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.10908) (Orekondy et al., 2019) ([code](https:\u002F\u002Fgithub.com\u002Ftribhuvanesh\u002Fprediction-poisoning))\n- [**IReEn: Iterative Reverse-Engineering of Black-Box Functions via Neural Program Synthesis**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10720) (Hajipour et al., 2020)\n- [**ES Attack: Model Stealing against Deep Neural Networks without Data Hurdles**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.09560) (Yuan et al., 2020)\n- [**Black-Box Ripper: Copying black-box models using generative evolutionary algorithms**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11158) (Barbalau et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002Fantoniobarbalau\u002Fblack-box-ripper))\n- [**Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12751) (Wu et al., 2020) \n- [**Model Extraction Attacks and Defenses on Cloud-Based Machine Learning Models**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9311938?casa_token=f_8Lg24vAQkAAAAA:A7P5ym7bTLFIJZtL2yGorscyQC2R1WGJUKzcO-pn8wADHus0w8NArN-nv0JFcKYhwwQFeCaptQ) (Gong et al., 2020)\n- [**Leveraging Extracted Model Adversaries for Improved Black Box Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.16336) (Nizar and Kobren, 2020)\n- [**Differentially Private Machine Learning Model against Model Extraction Attack**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9291542) (Cheng et al., 2020)\n- [**Model Extraction Attacks and Defenses on Cloud-Based Machine Learning Models**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9311938?casa_token=YP1PeB4XPqEAAAAA:q1Ni88642UTpBQ8r7jUe9tbWjMWG9lw3v8CK4g1q7V-ZShK0KonYuMiapY4rXDfKVNST6xLtSw) (Gong et al., 2020)\n- [**Stealing Neural Network Models through the Scan Chain: A New Threat for ML Hardware**](https:\u002F\u002Feprint.iacr.org\u002F2021\u002F167) (Potluri and Aysu, 2021)\n- [**Model Extraction and Defenses on Generative Adversarial Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.02069) (Hu and Pang, 2021)\n- [**Protecting Decision Boundary of Machine Learning Model With Differentially Private Perturbation**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9286504) (Zheng et al., 2021)\n- [**Special-Purpose Model Extraction Attacks: Stealing Coarse Model with Fewer Queries**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9343086?casa_token=Fn4CtwOZsbQAAAAA:4n3tZGcwFochwREqn4fRWcmA9YeLRxikwB1LN8t2ui1NbRPHSHjTuoqHrSfP1vxXfecw0kobBQ) (Okada et al., 2021)\n- [**Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.10013) (He et al., 2021) ([code](https:\u002F\u002Fgithub.com\u002Fxlhex\u002Fextract_and_transfer))\n- [**Thief, Beware of What Get You There: Towards Understanding Model Extraction Attack**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05921) (Zhang et al., 2021)\n- [**Model Weight Theft With Just Noise Inputs: The Curious Case of the Petulant Attacker**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08987) (Roberts et al., 2019)\n- [**Protecting DNNs from Theft using an Ensemble of Diverse Models**](https:\u002F\u002Fopenreview.net\u002Fforum?id=LucJxySuJcE) (Kariyappa et al., 2021)\n- [**Information Laundering for Model Privacy**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06112) (Wang et al., 2021)\n- [**Deep Neural Network Fingerprinting by Conferrable Adversarial Examples**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.00888) (Lukas et al., 2021)\n- [**BODAME: Bilevel Optimization for Defense Against Model Extraction**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06797) (Mori et al., 2021)\n- [**Dataset Inference: Ownership Resolution in Machine Learning**](https:\u002F\u002Fopenreview.net\u002Fforum?id=hvdKKV2yt7T) (Maini et al., 2021)\n- [**Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Generative Adversarial Networks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12623) (Szyller et al., 2021)\n- [**Towards Characterizing Model Extraction Queries and How to Detect Them**](https:\u002F\u002Fwww2.eecs.berkeley.edu\u002FPubs\u002FTechRpts\u002F2021\u002FEECS-2021-126.pdf) (Zhang et al., 2021)\n- [**Hardness of Samples Is All You Need: Protecting Deep Learning Models Using Hardness of Samples**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11424) (Sadeghzadeh et al., 2021)\n- [**Stateful Detection of Model Extraction Attacks**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.05166) (Pal et al., 2021)\n- [**MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.08909) (Miura et al., 2021)\n- [**INVERSENET: Augmenting Model Extraction Attacks with Training Data Inversion**](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2021\u002F0336.pdf) (Gong et al., 2021)\n- [**Increasing the Cost of Model Extraction with Calibrated Proof of Work**](https:\u002F\u002Fopenreview.net\u002Fforum?id=EAy7C1cgE1L) (Dziedzic et al. 2022) [code](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002Fmodel-extraction-iclr)\n- [**On the Difficulty of Defending Self-Supervised Learning against Model Extraction**](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdziedzic22a\u002Fdziedzic22a.pdf) (Dziedzic et al., 2022) [code](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002Fssl-attacks-defenses)\n- [**Dataset Inference for Self-Supervised Models**](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F4ebf0617b32da2cd083c3b17c7285cce-Abstract-Conference.html) (Dziedzic et al., 2022) [code](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002FDatasetInferenceForSelfSupervisedModels)\n- [**Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.07513) (Sha et al., 2022) \n- [**StolenEncoder: Stealing Pre-trained Encoders**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05889) (Liu et al., 2022)\n- [**Model Extraction Attacks Revisited**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05386) (Liang et al., 2023)\n\n\n# Other\n- [**Prompts Should not be Seen as Secrets: Systematically Measuring Prompt Extraction Attack Success**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06865)(Zhang et al., 2023)\n- [**Amnesiac Machine Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.10981) (Graves et al., 2020)\n- [**Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.03561) (Naseri et al., 2020)\n- [**Analyzing Information Leakage of Updates to Natural Language Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.07942) (Brockschmidt et al., 2020)\n- [**Estimating g-Leakage via Machine Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04399) (Romanelli et al., 2020)\n- [**Information Leakage in Embedding Models**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.00053) (Song and Raghunathan, 2020)\n- [**Hide-and-Seek Privacy Challenge**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12087) (Jordan et al., 2020)\n- [**Synthetic Data -- Anonymisation Groundhog Day**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.07018) (Stadler et al., 2020) ([code](https:\u002F\u002Fgithub.com\u002Fspring-epfl\u002Fsynthetic_data_release))\n- [**Robust Membership Encoding: Inference Attacks and CopyrightProtection for Deep Learning**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.12982.pdf) (Song and Shokri, 2020)\n- [**Quantifying Privacy Leakage in Graph Embedding**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00906) (Duddu et al., 2020)\n- [**Quantifying and Mitigating Privacy Risks of Contrastive Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.04140) (He and Zhang, 2021)\n- [**Coded Machine Unlearning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.15721) (Aldaghri et al., 2020)\n- [**Unlearnable Examples: Making Personal Data Unexploitable**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04898) (Huang et al., 2021)\n- [**Measuring Data Leakage in Machine-Learning Models with Fisher Information**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11673) (Hannun et al., 2021)\n- [**Teacher Model Fingerprinting Attacks Against Transfer Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12478) (Chen et al, 2021)\n- [**Bounding Information Leakage in Machine Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03875) (Del Grosso et al., 2021)\n- [**RoFL: Attestable Robustness for Secure Federated Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03311) (Burkhalter et al., 2021)\n- [**Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06628) (Struppek et al., 2021)\n- [**The Privacy Onion Effect: Memorization is Relative**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.10469) (Carlini et al., 2022)\n- [**Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00032) (Tramer et al., 2022)\n- [**LCANets++: Robust Audio Classification using Multi-layer Neural Networks with Lateral Competition**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12882) (Dibbo et al., 2023)\n\n","# 机器学习隐私方面的强大攻击   [![Awesome](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png\u002Fbadge.svg)](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png)\n本仓库包含一份精心整理的、与针对机器学习的隐私攻击相关的论文列表。在作者提供的情况下，也会一并附上代码库。如发现任何错误、有改进建议或遗漏的论文，请直接提交问题或拉取请求。\n\n# 目录\n- [机器学习隐私方面的强大攻击   ![Awesome](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_readme_e4548dc55ca3.png)](#机器学习隐私方面的强大攻击-img-srchttpsawesomerebadgesvg-altawesome)\n- [目录](#目录)\n- [综述与概述](#综述与概述)\n- [隐私测试工具](#隐私测试工具)\n- [论文与代码](#论文与代码)\n  - [成员身份推断](#成员身份推断)\n  - [重构](#重构)\n  - [属性推断\u002F分布推断](#属性推断)\n  - [模型提取](#模型提取)\n- [其他](#其他)\n\n# 综述与概述\n- [**SoK：模型逆向攻击全景图：分类、挑战及未来路线图**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10221914) (Sayanton Dibbo, 2023)\n- [**机器学习中的隐私攻击综述**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3624010) (Rigaki and Garcia, 2023)\n- [**机器学习中的隐私概述**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.08679) (De Cristofaro, 2020)\n- [**重新思考保护隐私的深度学习：如何评估并阻止隐私攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.11601) (Fan et al., 2020)\n- [**深度学习中的隐私与安全问题：综述**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9294026) (Liu et al., 2021)\n- [**ML-Doctor：针对机器学习模型推理攻击的整体风险评估**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.02551) (Liu et al., 2021)\n- [**机器学习中的成员身份推断攻击：综述**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07853) (Hu et al., 2021)\n- [**综述：推理时的泄露与隐私**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.01614) (Jegorova et al., 2021)\n- [**嵌入式神经网络模型面临的机密性威胁综述**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.01401) (Joud et al., 2021)\n- [**联邦学习攻击再探讨：对漏洞、假设及评估设置的批判性讨论**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.03363) (Wainakh et al., 2021)\n- [**我知道你去年夏天训练了什么：关于窃取机器学习模型及其防御措施的综述**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.08451) (Oliynyk et al., 2022)\n\n# 隐私测试工具\n- [**PrivacyRaven**](https:\u002F\u002Fgithub.com\u002Ftrailofbits\u002FPrivacyRaven) (Trail of Bits)\n- [**TensorFlow Privacy**](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fprivacy\u002Ftree\u002Fmaster\u002Ftensorflow_privacy\u002Fprivacy\u002Fmembership_inference_attack) (TensorFlow)\n- [**机器学习隐私检测仪**](https:\u002F\u002Fgithub.com\u002Fprivacytrustlab\u002Fml_privacy_meter) (NUS数据隐私与可信机器学习实验室)\n- [**CypherCat（仅存档）**](https:\u002F\u002Fgithub.com\u002FLab41\u002Fcyphercat) (IQT Labs\u002F实验室41)\n- [**对抗鲁棒性工具箱（ART）**](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002Fadversarial-robustness-toolbox) (IBM)\n \n# 论文与代码\n\n## 成员身份推断\n关于机器学习模型的成员身份推断论文精选列表（超过100篇）可在[此仓库](https:\u002F\u002Fgithub.com\u002FHongshengHu\u002Fmembership-inference-machine-learning-literature)中找到。\n- [**针对机器学习模型的成员身份推断攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=7958568)（Shokri等，2017年）（[代码](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fmembership-inference)）\n- [**理解在泛化良好的学习模型上的成员身份推断**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.04889)（Long等，2018年）\n- [**机器学习中的隐私风险：分析与过拟合的关系**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8429311)，（Yeom等，2018年）（[代码](https:\u002F\u002Fgithub.com\u002Fsamuel-yeom\u002Fml-privacy-csf18)）\n- [**针对差分隐私深度学习模型的成员身份推断攻击**](http:\u002F\u002Fwww.tdp.cat\u002Fissues16\u002Ftdp.a289a17.pdf)（Rahman等，2018年）\n- [**深度学习的全面隐私分析：针对集中式和联邦学习的被动与主动白盒推断攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=8835245)（Nasr等，2019年）（[代码](https:\u002F\u002Fgithub.com\u002Fprivacytrustlab\u002Fml_privacy_meter)）\n- [**Logan：针对生成模型的成员身份推断攻击**](https:\u002F\u002Fcontent.sciendo.com\u002Fdownloadpdf\u002Fjournals\u002Fpopets\u002F2019\u002F1\u002Farticle-p133.xml)（Hayes等，2019年）（[代码](https:\u002F\u002Fgithub.com\u002Fjhayes14\u002Fgen_mem_inf)）\n- [**实践中评估差分隐私机器学习**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-jayaraman.pdf)（Jayaraman和Evans，2019年）（[代码](https:\u002F\u002Fgithub.com\u002Fbargavj\u002FEvaluatingDPML)）\n- [**Ml-leaks：与模型和数据无关的机器学习模型成员身份推断攻击及防御**](https:\u002F\u002Fwww.ndss-symposium.org\u002Fwp-content\u002Fuploads\u002F2019\u002F02\u002Fndss2019_03A-1_Salem_paper.pdf)（Salem等，2019年）（[代码](https:\u002F\u002Fgithub.com\u002FAhmedSalem2\u002FML-Leaks)）\n- [**保护机器学习模型免受对抗样本攻击的隐私风险**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3319535.3354211)（Song L.等，2019年）（[代码](https:\u002F\u002Fgithub.com\u002Finspire-group\u002Fprivacy-vs-robustness)）\n- [**白盒与黑盒：成员身份推断的贝叶斯最优策略**](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Fsablayrolles19a.html)（Sablayrolles等，2019年）\n- [**解释机器学习模型的隐私风险**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.00164)（Shokri等，2019年）\n- [**揭秘作为服务的机器学习中的成员身份推断攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F8634878)（Truex等，2019年）\n- [**针对生成模型的蒙特卡洛和重构成员身份推断攻击**](https:\u002F\u002Fcontent.sciendo.com\u002Fview\u002Fjournals\u002Fpopets\u002F2019\u002F4\u002Farticle-p232.xml)（Hilprecht等，2019年）\n- [**MemGuard：通过对抗样本防御黑盒成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.10594)（Jia等，2019年）（[代码](https:\u002F\u002Fgithub.com\u002Fjjy1994\u002FMemGuard)）\n- [**Gan-leaks：针对GAN的成员身份推断攻击分类**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.03935.pdf)（Chen等，2019年）\n- [**文本生成模型中的数据溯源审计**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3292500.3330885)（Song和Shmatikov，2019年）\n- [**序列到序列模型的成员身份推断攻击：我的数据是否在你的机器翻译系统中？**](https:\u002F\u002Fwww.mitpressjournals.org\u002Fdoi\u002Ffull\u002F10.1162\u002Ftacl_a_00299)（Hisamoto等，2020年）\n- [**在现实假设下重新审视成员身份推断**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.10881.pdf)（Jayaraman等，2020年）\n- [**当机器遗忘危及隐私时**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.02205.pdf)（Chen等，2020年）\n- [**机器学习中成员信息泄露的建模与量化**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.10648.pdf)（Farokhi和Kaafar，2020年）\n- [**机器学习模型隐私风险的系统性评估**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.10595)（Song和Mittal，2020年）（[代码](https:\u002F\u002Fgithub.com\u002Finspire-group\u002Fmembership-inference-evaluation)）\n- [**迈向深度模型成员身份推断的不可行性**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2005.13702.pdf)（Rezaei和Liu，2020年）（[代码](https:\u002F\u002Fgithub.com\u002Fshrezaei\u002FMI-Attack)）\n- [**被盗的记忆：利用模型记忆进行校准的白盒成员身份推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.11798)（Leino和Fredrikson，2020年）\n- [**仅标签成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.14321)（Choquette Choo等，2020年）\n- [**标签泄露：基于标签的成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15528)（Li和Zhang，2020年）\n- [**通过因果学习缓解隐私攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.12732)（Tople等，2020年）\n- [**正则化对成员身份推断攻击的有效性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.05336)（Kaya等，2020年）\n- [**采样攻击：重复查询对成员身份推断攻击的放大作用**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00395)（Rahimian等，2020年）\n- [**分割泄露：语义图像分割中的成员身份推断攻击与防御**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.09685)（He等，2019年）\n- [**差分隐私防御与成员身份推断的采样攻击**](https:\u002F\u002Fpriml-workshop.github.io\u002Fpriml2019\u002Fpapers\u002FPriML2019_paper_47.pdf)（Rahimian等，2019年）\n- [**privGAN：以低成本保护GAN免受成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.00071)（Mukherjee等，2020年）\n- [**共享模型或核心集：基于成员身份推断攻击的研究**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.02977)（Lu等，2020年）\n- [**野外深度学习的隐私分析：针对迁移学习的成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.04872)（Zou等，2020年）\n- [**通过泛化差距及其他模型指标量化成员身份推断脆弱性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05669)（Bentley等，2020年）\n- [**MACE：生成模型中灵活的成员隐私估计框架**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05683)（Liu等，2020年）\n- [**关于素数、对数损失分数与（无）隐私**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08559)（Aggarwal等，2020年）\n- [**MCMIA：深度神经网络中对抗成员身份推断攻击的模型压缩**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2008.13578)（Wang等，2020年）\n- [**基于自助法的点式泛化成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.08738)（Felps等，2020年）\n- [**差分隐私学习并不能限制成员身份推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12112)（Humphries等，2020年）\n- [**通过信息泄露量化成员隐私**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.05965)（Saeidian等，2020年）\n- [**不平等的脆弱性：关于机器学习隐私攻击的不公平性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.00389)（Yaghini等，2020年）\n- [**以长矛为盾：一种新颖的基于对抗样本的隐私保护技术，用于防御成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.13696)（Xue等，2020年）\n- [**面向现实的成员身份推断：以调查数据为例**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3427228.3427282?casa_token=eHK7DPiTIigAAAAA:sinfqtYoQA8GddIiwn28DYNEG1NsvW42wvUnRLkpBGQKhrI_mawTRV8MOmLGotqaTspYS-eOIp56UQ)\n- [**由于查询的线性性质导致的差分隐私意外信息泄露**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.08958)（Huang等，2020年）\n- [**TransMIA：利用迁移影子训练进行成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.14661)（Hidano等，2020年）\n- [**扩展法诺不等式以刻画模型对成员身份推断攻击的敏感性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.08097)（Jha等，2020年）\n- [**对手实例化：差分隐私机器学习的下界**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04535)（Nasr等，2021年）\n- [**边缘智能中多等级服务模型的成员身份推断攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9355044?casa_token=jQafisVyDTsAAAAA:hjGRYTW8BWFnmvVnXdsrv42HQoBJ0EdzBhJf34nAf2BWlydneTb6Ll8GVnVdGfI4M4S53r3hMw)（Wang等，2021年）\n- [**基于重建的成员身份推断攻击在复杂问题上更容易**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.07762)（Shafran等，2021年）\n- [**针对神经影像学深度回归模型的成员身份推断攻击**](https:\u002F\u002Fopenreview.net\u002Fforum?id=8lL_y9n-CV)（Gupta等，2021年）\n- [**针对图神经网络的节点级成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.05429)（He等，2021年）\n- [**通过差异比较进行实用的盲式成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.01341)（Hui等，2021年）\n- [**ADePT：基于自编码器的差分隐私文本转换**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.01502)（Krishna等，2021年）\n- [**联邦学习中的来源推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.05659)（Hu等，2021年）（[代码](https:\u002F\u002Fgithub.com\u002FHongshengHu\u002Fsource-inference-FL)）\n- [**dropout对差分隐私模型中成员身份推断的影响**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.09008)（Galinkin，2021年）\n- [**临床语言模型的成员身份推断攻击易感性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08305)（Jagannatha等，2021年）\n- [**知识图谱的成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.08273)（Wang和Sun，2021年）\n- [**数据增强何时有助于抵御成员身份推断攻击？**](http:\u002F\u002Fproceedings.mlr.press\u002Fv139\u002Fkaya21a.html)（Kaya和Dumitras，2021年）\n- [**训练参数和架构选择对神经网络成员身份推断攻击脆弱性的影响**](https:\u002F\u002Fwww.mi.fu-berlin.de\u002Finf\u002Fgroups\u002Fag-idm\u002Ftheseses\u002F2021_oussama_bouanani_bsc_thesis.pdf)（Bouanani，2021年）\n- [**词嵌入及更广泛领域的成员身份推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11384)（Mahloujifar等，2021年）\n- [**TableGAN-MCA：评估GAN合成表格数据发布时的成员身份冲突**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.13190)（Hu等，2021年）\n- [**针对机器学习模型的增强型成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09679)（Ye等，2021年）\n- [**不要信任预测分数来进行成员身份推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.09076)（Hintersdorf等，2021年）\n- [**通过后门进行成员身份推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.04823)（Hu等，2022年）\n\n\n\n## 重建攻击\n重建攻击还包括被称为*模型反演*和*属性推断*的攻击。\n- [**药效基因组学中的隐私：个性化华法林剂量的端到端案例研究**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity14\u002Fsec14-paper-fredrikson-privacy.pdf)（弗雷德里克森等，2014年）\n- [**利用置信度信息的模型反演攻击及基本防御措施**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F2810103.2813677)（弗雷德里克森等，2015年）([代码](https:\u002F\u002Fgithub.com\u002Fyashkant\u002FModel-Inversion-Attack))\n- [**用于形式化模型反演攻击的方法论**](https:\u002F\u002Fieeexplore.ieee.org\u002Fstamp\u002Fstamp.jsp?arnumber=7536387)（吴等，2016年）\n- [**对抗生成网络下的深度模型：协作式深度学习中的信息泄露**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3133956.3134012)（希塔伊等，2017年）\n- [**记住太多内容的机器学习模型**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3133956.3134077)（宋C.等，2017年）([代码](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fml-model-remember))\n- [**针对预测系统的模型反演攻击：无需非敏感属性知识**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8476191\u002F8476869\u002F08476925.pdf?casa_token=VQ_s2jcJFp8AAAAA:Hg-wdpPcESm9UUsZHxCLzIvYqVEqW11_OCXEyjARxW5K2cFYi6EFNXlH8IKKjNSgv6oQoQJlsw)（日田野等，2017年）\n- [**秘密分享者：评估与测试神经网络中的意外记忆现象**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-carlini.pdf)（卡利尼等，2019年）\n- [**从梯度中泄漏的深度信息**](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F9617-deep-leakage-from-gradients.pdf)（朱等，2019年）([代码](https:\u002F\u002Fgithub.com\u002Fmit-han-lab\u002Fdlg))\n- [**针对协作推理的模型反演攻击**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3359789.3359824)（何等，2019年）([代码](https:\u002F\u002Fgithub.com\u002Fzechenghe\u002FInverse_Collaborative_Inference))\n- [**超越类代表推断：联邦学习中的用户级隐私泄露**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8737416)（王等，2019年）\n- [**基于背景知识对齐的对抗环境下神经网络反演**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3319535.3354261?casa_token=lDNQ40-4Wa4AAAAA:p9olQ3qMdDZ0n2sl-nNIgk4sOuLRMBTGVTxycZ5wjGpnFPf5lTz-MYw0e8ISggSseHC9T46it5yX)（杨等，2019年）\n- [**iDLG：改进的梯度深度信息泄漏**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2001.02610)（赵等，2020年）([代码](https:\u002F\u002Fgithub.com\u002FPatrickZH\u002FImproved-Deep-Leakage-from-Gradients))\n- [**通用语言模型的隐私风险**](https:\u002F\u002Fwww.researchgate.net\u002Fprofile\u002FXudong_Pan3\u002Fpublication\u002F340965355_Privacy_Risks_of_General-Purpose_Language_Models\u002Flinks\u002F5ea7ca55a6fdccd7945b6a7d\u002FPrivacy-Risks-of-General-Purpose-Language-Models.pdf)（潘等，2020年）\n- [**秘密揭示者：针对深度神经网络的生成式模型反演攻击**](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2020\u002Fpapers\u002FZhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.pdf)（张等，2020年）\n- [**反演梯度——在联邦学习中破坏隐私到底有多容易？**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2003.14053)（盖平等，2020年）\n- [**GAMIN：一种针对黑盒模型反演的对抗方法**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1909.11835)（艾沃吉等，2019年）\n- [**用于信息混淆的对抗性表征学习的权衡与保证**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.07902)（赵等，2020年）\n- [**从损失函数重构训练样本**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.07337.pdf)（桑奈，2018年）\n- [**评估联邦学习中梯度泄漏攻击的框架**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2004.10397.pdf)（魏等，2020年）\n- [**GAN下的深度模型：协作式深度学习中的信息泄露**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.07464.pdf)（希塔伊等，2017年）\n- [**超越类代表推断：联邦学习中的用户级隐私泄露**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.00535.pdf)（王等，2018年）\n- [**探索深度学习计算卸载中的图像重构攻击**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3325413.3329791)（欧和李，2019年）\n- [**我知道你在看什么：卷积神经网络加速器上的功耗侧信道攻击**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1803.05847.pdf)（魏等，2019年）\n- [**更新泄漏：在线学习中的数据集推断与重构攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.01067)（萨勒姆等，2019年）\n- [**照亮黑暗：如何恢复本不应被看到的内容——基于特征提取的分类器**](https:\u002F\u002Feprint.iacr.org\u002F2018\u002F1001)（卡尔波夫等，2020年）\n- [**模型反演攻击的评估指标**](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d\u002Fview)（田中等，2020年）\n- [**理解联邦学习中的意外记忆现象**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.07490)（塔卡尔等，2020年）\n- [**基于攻击的差分隐私学习对模型反演攻击的评估方法**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8822435)（朴等，2019年）\n- [**利用隐私导向训练降低模型反演风险**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.15877)（戈尔德斯汀等，2020年）\n- [**抵御模型反演攻击的稳健透明度**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9178452)（阿卢费桑等，2020年）\n- [**人工智能会记忆吗？神经网络与被遗忘权**](https:\u002F\u002Fuwspace.uwaterloo.ca\u002Fhandle\u002F10012\u002F15754)（格雷夫斯等，2020年）\n- [**通过互信息正则化提升对模型反演攻击的鲁棒性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.05241)（王等，2020年）\n- [**SAPAG：一种自适应的梯度隐私攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06228)（王等，2020年）\n- [**基于理论的线性方程求解器实现的梯度深度信息泄漏**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.13356)（潘等，2020年）\n- [**改进的模型反演攻击技术**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.04092)（陈等，2020年）\n- [**针对分类模型的黑盒模型反演属性推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.03404)（梅赫纳兹等，2020年）\n- [**深度人脸识别隐私攻击：由深度生成对抗数据空间判别器初始化的模型反演**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9306253?casa_token=H78uIRJ2smYAAAAA:iQiA_5d2a2mbH4oBF9EZwSjakAz3Muq3ZOkNDBkK_fLq19PEMGEvpipyli7d9SGKESglqIb9Ug)（霍斯拉维等，2020年）\n- [**MixCon：调整数据表示的可分离性以使数据恢复更加困难**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11463)（李等，2020年）\n- [**面向医疗数据的深度学习推理攻击模型评估**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.00177)（吴等，2020年）\n- [**FaceLeaks：通过黑盒查询对迁移学习模型进行的推理攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.14023)（刘和高桥，2020年）\n- [**从大型语言模型中提取训练数据**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.07805)（卡利尼等，2020年）\n- [**MIDAS：使用近似内存系统进行模型反演防御**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9358254)（徐等，2021年）\n- [**KART：基于临床记录预训练的语言模型隐私泄露框架**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.00036)（中村等，2020年）\n- [**从机器学习模型中推导约束及其在安全与隐私中的应用**](https:\u002F\u002Fhal.archives-ouvertes.fr\u002Fhal-03091740\u002F)（法拉斯基等，2021年）\n- [**关于机器学习模型属性推断攻击的可行性探讨**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.07101)（赵等，2021年）\n- [**针对分裂神经网络的实用模型反演攻击防御措施**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05743)（蒂特科姆等，2021年）\n- [**R-GAP：针对隐私的递归梯度攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.07733)（朱和布拉斯科，2021年）\n- [**利用解释性信息进行模型反演攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12669)（赵等，2021年）\n- [**SAFELearn：用于私密联邦学习的安全聚合**](https:\u002F\u002Fencrypto.de\u002Fpapers\u002FFMMMMNRSSYZ21.pdf)（费雷多尼等，2021年）\n- [**基于临床笔记预训练的BERT是否会泄露敏感数据？**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.07762)（莱曼等，2021年）\n- [**语言模型中的训练数据泄露分析**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.05405)（伊南等，2021年）\n- [**通过模型碎片化、洗牌与聚合缓解联邦学习中的模型反演问题**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9478813?casa_token=047c6zFuwm4AAAAA:h6qWPCm6WXUbtVgk1iATPshiPMfvGEp6lVUrblEm8P2tRX4OIDEDpnzICVwYveoENEnH6Ig-yg)（马苏德等，2021年）\n- [**PRECODE——一种防止深度梯度泄漏的通用模型扩展**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.04725)（谢利加等，2021年）\n- [**加密深度特征的重要性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2108.07147)（倪等，2021年）\n- [**用对抗样本防御模型反演攻击**](https:\u002F\u002Fwww.cs.hku.hk\u002Fdata\u002Ftechreps\u002Fdocument\u002FTR-2021-03.pdf)（温等，2021年）\n- [**透过梯度看世界：基于GradInversion的图像批次恢复**](https:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent\u002FCVPR2021\u002Fpapers\u002FYin_See_Through_Gradients_Image_Batch_Recovery_via_GradInversion_CVPR_2021_paper.pdf)（尹等，2021年）\n- [**变分模型反演攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.10787)（王等，2021年）\n- [**在知情对手协助下重构训练数据**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.04845)（巴列等，2022年）\n- [**即插即用式攻击：迈向稳健且灵活的模型反演攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.12179)（斯特鲁佩克等，2022年）\n- [**无数据模式下的模型反演攻击对分割计算的隐私威胁**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.06304)（董等，2022年）\n- [**针对合成数据的属性推断攻击的线性重构方法**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2301.10053)（安纳马莱等，2023年）\n- [**模型反演攻击中隐藏信息的分析与利用**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F10184490)（张等，2023年）([代码](https:\u002F\u002Fgithub.com\u002Fzhangzp9970\u002FAmplified-MIA))\n- [**文本嵌入所揭示的信息几乎与原文一样多**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2310.06816?ref=upstract.com)（莫里斯等，2023年）\n- [**关于基于相似性的隐私度量不足：针对“真正匿名合成数据”的重构攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05114)（加涅夫和德克里斯托法罗，2023年）\n- [**信息最少的模型反演攻击及其差异性脆弱性的深入分析**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10136179)（迪博等，2023年）\n\n## 属性推断 \u002F 分布推断\n- [**用更智能的机器攻破智能机器：如何从机器学习分类器中提取有意义的数据**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1504\u002FIJSN.2015.071829)（Ateniese 等，2015）\n- [**利用置换不变表示对全连接神经网络进行属性推断攻击**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3243734.3243834)（Ganju 等，2018）\n- [**利用协作学习中的非预期特征泄露**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8826229\u002F8835208\u002F08835269.pdf)（Melis 等，2019）（[代码](https:\u002F\u002Fgithub.com\u002Fcsong27\u002Fproperty-inference-collaborative-ml)）\n- [**过拟合揭示敏感属性**](https:\u002F\u002Fopenreview.net\u002Fpdf?id=SJeNz04tDS)（Song C. 等，2020）（[代码](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1hu0PhN3pWXe6LobxiPFeYBm8L-vQX2zJ\u002Fview?usp=sharing)）\n- [**协作学习中的主体属性推断攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F9204357)（Xu 和 Li，2020）\n- [**基于投毒的属性推断**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.11073)（Chase 等，2021）\n- [**卷积神经网络上的属性推断攻击：目标模型复杂度的影响与启示**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.13061)（Parisot 等，2021）\n- [**诚实但好奇的网络：私有输入的敏感属性可被秘密编码到分类器输出的熵中**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.12049)（Malekzadeh 等，2021）（[代码](https:\u002F\u002Fgithub.com\u002Fmmalekzadeh\u002Fhonest-but-curious-nets)）\n- [**针对生成对抗网络的属性推断攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.07608)（Zhou 等，2021）（[代码](https:\u002F\u002Fgithub.com\u002FZhou-Junhao\u002FPIA_GAN)）\n- [**形式化并估计分布推断风险**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2109.06024)（Suri 和 Evans，2022）（[代码](https:\u002F\u002Fgithub.com\u002Fiamgroot42\u002FFormEstDistRisks)）\n- [**剖析分布推断**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10136142)（Suri 等，2023）（[代码](https:\u002F\u002Fgithub.com\u002Fiamgroot42\u002Fdissecting_dist_inf)）\n- [**SNAP：通过投毒高效提取隐私属性**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F10179334)（Chaudhari 等，2023）（[代码](https:\u002F\u002Fgithub.com\u002Fjohnmath\u002Fsnap-sp23)）\n\n## 模型提取\n- [**通过预测 API 盗取机器学习模型**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fconference\u002Fusenixsecurity16\u002Fsec16_paper_tramer.pdf)（Tramèr 等，2016）([代码](https:\u002F\u002Fgithub.com\u002Fftramer\u002FSteal-ML))\n- [**盗取机器学习中的超参数**](https:\u002F\u002Fieeexplore.ieee.org\u002Fiel7\u002F8418581\u002F8418583\u002F08418595.pdf)（Wang B. 等，2018）\n- [**Copycat CNN：利用随机未标注数据诱供以窃取知识**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8489592)（Correia-Silva 等，2018）([代码](https:\u002F\u002Fgithub.com\u002Fjeiks\u002FStealing_DL_Models))\n- [**迈向黑盒神经网络的逆向工程**](https:\u002F\u002Fopenreview.net\u002Fforum?id=BydjJte0-)（Oh 等，2018）([代码](https:\u002F\u002Fgithub.com\u002Fcoallaoh\u002FWhitenBlackBox))\n- [**Knockoff Nets：窃取黑盒模型的功能**](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_CVPR_2019\u002Fpapers\u002FOrekondy_Knockoff_Nets_Stealing_Functionality_of_Black-Box_Models_CVPR_2019_paper.pdf)（Orekondy 等，2019）([代码](https:\u002F\u002Fgithub.com\u002Ftribhuvanesh\u002Fknockoffnets))\n- [**PRADA：防御 DNN 模型窃取攻击**](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8806737)（Juuti 等，2019）([代码](https:\u002F\u002Fgithub.com\u002FSSGAalto\u002Fprada-protecting-against-dnn-model-stealing-attacks))\n- [**从模型解释中重建模型**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fabs\u002F10.1145\u002F3287560.3287562)（Milli 等，2019）\n- [**探索主动学习与模型提取之间的联系**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec20summer_chandrasekaran_prepub.pdf)（Chandrasekaran 等，2020）\n- [**高精度、高保真度的神经网络提取**](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fusenixsecurity20\u002Fpresentation\u002Fjagielski)（Jagielski 等，2020）\n- [**芝麻街上的窃贼！基于 BERT 的 API 的模型提取**](https:\u002F\u002Fopenreview.net\u002Fattachment?id=Byl5NREFDr&name=original_pdf)（Krishna 等，2020）([代码](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Flanguage\u002Ftree\u002Fmaster\u002Flanguage\u002Fbert_extraction))\n- [**神经网络模型的密码分析式提取**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2003.04884.pdf)（Carlini 等，2020）\n- [**CloudLeak：通过对抗样本大规模窃取深度学习模型**](https:\u002F\u002Fwww.ndss-symposium.org\u002Fndss-paper\u002Fcloudleak-large-scale-deep-learning-models-stealing-through-adversarial-examples\u002F)（Yu 等，2020）\n- [**ACTIVETHIEF：利用主动学习和无标注公开数据进行模型提取**](https:\u002F\u002Fwww.aaai.org\u002Fojs\u002Findex.php\u002FAAAI\u002Farticle\u002Fview\u002F5432)（Pal 等，2020）([代码](https:\u002F\u002Fbitbucket.org\u002Fiiscseal\u002Factivethief\u002Fsrc\u002Fmaster))\n- [**高效窃取你的机器学习模型**](https:\u002F\u002Fencrypto.de\u002Fpapers\u002FRST19.pdf)（Reith 等，2019）\n- [**复杂 DNN 模型的提取：真实威胁还是危言耸听？**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1910.05429.pdf)（Atli 等，2020）\n- [**通过时序侧信道窃取神经网络**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1812.11720.pdf)（Duddu 等，2019）\n- [**DeepSniffer：基于学习架构线索的 DNN 模型提取框架**](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002Fpdf\u002F10.1145\u002F3373376.3378460)（Hu 等，2020）([代码](https:\u002F\u002Fgithub.com\u002Fxinghu7788\u002FDeepSniffer))\n- [**CSI NN：通过电磁侧信道逆向设计神经网络架构**](https:\u002F\u002Fwww.usenix.org\u002Fsystem\u002Ffiles\u002Fsec19-batina.pdf)（Batina 等，2019）\n- [**缓存心灵感应：利用共享资源攻击学习 DNN 架构**](https:\u002F\u002Fwww.usenix.org\u002Fconference\u002Fusenixsecurity20\u002Fpresentation\u002Fyan)（Yan 等，2020）\n- [**如何在业余时间攻陷 NAS**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2002.06776)（Hong 等，2020）([代码](https:\u002F\u002Fgithub.com\u002FSanghyun-Hong\u002FHow-to-0wn-NAS-in-Your-Spare-Time))\n- [**缓存侧信道攻击下运行的深度神经网络的安全性分析**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.03487)（Hong 等，2020）\n- [**深度 ReLU 网络的逆向工程**](https:\u002F\u002Fproceedings.icml.cc\u002Fstatic\u002Fpaper_files\u002Ficml\u002F2020\u002F1-Paper.pdf)（Rolnick 和 Kording，2020）\n- [**面向模型提取的数据发布与 k 匿名化**](https:\u002F\u002Flink.springer.com\u002Fchapter\u002F10.1007\u002F978-3-030-58208-1_13)（Fukuoka 等，2020）\n- [**Hermes 攻击：以无损推理精度窃取 DNN 模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.12784)（Zhu 等，2020）\n- [**从反事实解释中提取模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.01884)（Aïvodji 等，2020）([代码](https:\u002F\u002Fgithub.com\u002Faivodji\u002Fmrce))\n- [**MetaSimulator：为查询高效的黑盒攻击模拟未知目标模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.00960)（Chen 和 Yong，2020）([代码](https:\u002F\u002Fgithub.com\u002Fmachanic\u002FMetaSimulator))\n- [**预测中毒：迈向防御 DNN 模型窃取攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.10908)（Orekondy 等，2019）([代码](https:\u002F\u002Fgithub.com\u002Ftribhuvanesh\u002Fprediction-poisoning))\n- [**IReEn：通过神经程序合成迭代逆向设计黑盒函数**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.10720)（Hajipour 等，2020）\n- [**ES 攻击：无需数据障碍即可对深度神经网络实施模型窃取**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.09560)（Yuan 等，2020）\n- [**黑盒拆解者：利用生成式进化算法复制黑盒模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.11158)（Barbalau 等，2020）([代码](https:\u002F\u002Fgithub.com\u002Fantoniobarbalau\u002Fblack-box-ripper))\n- [**图神经网络的模型提取攻击：分类与实现**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.12751)（Wu 等，2020）\n- [**云上机器学习模型的模型提取攻击与防御**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9311938?casa_token=f_8Lg24vAQkAAAAA:A7P5ym7bTLFIJZtL2yGorscyQC2R1WGJUKzcO-pn8wADHus0w8NArN-nv0JFcKYhwwQFeCaptQ)（Gong 等，2020）\n- [**利用提取的模型对手改进黑盒攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.16336)（Nizar 和 Kobren，2020）\n- [**针对模型提取攻击的差分隐私机器学习模型**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9291542)（Cheng 等，2020）\n- [**云上机器学习模型的模型提取攻击与防御**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9311938?casa_token=YP1PeB4XPqEAAAAA:q1Ni88642UTpBQ8r7jUe9tbWjMWG9lw3v8CK4g1q7V-ZShK0KonYuMiapY4rXDfKVNST6xLtSw)（Gong 等，2020）\n- [**通过扫描链窃取神经网络模型：ML 硬件面临的新威胁**](https:\u002F\u002Feprint.iacr.org\u002F2021\u002F167)（Potluri 和 Aysu，2021）\n- [**生成对抗网络的模型提取与防御**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.02069)（Hu 和 Pang，2021）\n- [**用差分隐私扰动保护机器学习模型的决策边界**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9286504)（Zheng 等，2021）\n- [**专用模型提取攻击：以更少的查询窃取粗略模型**](https:\u002F\u002Fieeexplore.ieee.org\u002Fabstract\u002Fdocument\u002F9343086?casa_token=Fn4CtwOZsbQAAAAA:4n3tZGcwFochwREqn4fRWcmA9YeLRxikwB1LN8t2ui1NbRPHSHjTuoqHrSfP1vxXfecw0kobBQ)（Okada 等，2021）\n- [**模型提取与对抗迁移性：你的 BERT 很脆弱！**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.10013)（He 等，2021）([代码](https:\u002F\u002Fgithub.com\u002Fxlhex\u002Fextract_and_transfer))\n- [**窃贼啊，小心你得到的东西：迈向对模型提取攻击的理解**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.05921)（Zhang 等，2021）\n- [**仅凭噪声输入就能窃取模型权重：一个任性攻击者的奇特案例**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08987)（Roberts 等，2019）\n- [**用多样化模型集成保护 DNN 免遭窃取**](https:\u002F\u002Fopenreview.net\u002Fforum?id=LucJxySuJcE)（Kariyappa 等，2021）\n- [**用于模型隐私的信息清洗**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.06112)（Wang 等，2021）\n- [**通过可配置的对抗样本进行深度神经网络指纹识别**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.00888)（Lukas 等，2021）\n- [**BODAME：用于防御模型提取的双层优化**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2103.06797)（Mori 等，2021）\n- [**数据集推断：机器学习中的所有权判定**](https:\u002F\u002Fopenreview.net\u002Fforum?id=hvdKKV2yt7T)（Maini 等，2021）\n- [**好的艺术家临摹，伟大的艺术家窃取：针对图像翻译生成对抗网络的模型提取攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2104.12623)（Szyller 等，2021）\n- [**迈向对模型提取查询的特征化及其检测方法**](https:\u002F\u002Fwww2.eecs.berkeley.edu\u002FPubs\u002FTechRpts\u002F2021\u002FEECS-2021-126.pdf)（Zhang 等，2021）\n- [**样本难度就是一切：用样本难度保护深度学习模型**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.11424)（Sadeghzadeh 等，2021）\n- [**模型提取攻击的状态感知检测**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.05166)（Pal 等，2021）\n- [**MEGEX：针对基于梯度的可解释 AI 的无数据模型提取攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.08909)（Miura 等，2021）\n- [**INVERSENET：将训练数据反转用于增强模型提取攻击**](https:\u002F\u002Fwww.ijcai.org\u002Fproceedings\u002F2021\u002F0336.pdf)（Gong 等，2021）\n- [**用校准的工作量证明提高模型提取的成本**](https:\u002F\u002Fopenreview.net\u002Fforum?id=EAy7C1cgE1L)（Dziedzic 等，2022）[代码](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002Fmodel-extraction-iclr)\n- [**自监督学习抵御模型提取的难度探讨**](https:\u002F\u002Fproceedings.mlr.press\u002Fv162\u002Fdziedzic22a\u002Fdziedzic22a.pdf)（Dziedzic 等，2022）[代码](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002Fssl-attacks-defenses)\n- [**自监督模型的数据集推断**](https:\u002F\u002Fproceedings.neurips.cc\u002Fpaper_files\u002Fpaper\u002F2022\u002Fhash\u002F4ebf0617b32da2cd083c3b17c7285cce-Abstract-Conference.html)（Dziedzic 等，2022）[代码](https:\u002F\u002Fgithub.com\u002Fcleverhans-lab\u002FDatasetInferenceForSelfSupervisedModels)\n- [**偷不了？那就对比着偷吧！针对图像编码器的对比式窃取攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.07513)（Sha 等，2022）\n- [**StolenEncoder：窃取预训练编码器**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.05889)（Liu 等，2022）\n- [**模型提取攻击再审视**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2312.05386)（Liang 等，2023）\n\n# 其他\n- [**提示不应被视为秘密：系统性地衡量提示提取攻击的成功率**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2307.06865)（Zhang 等，2023）\n- [**失忆机器学习**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.10981)（Graves 等，2020）\n- [**迈向联邦学习中的鲁棒性和隐私保护：本地与中心化差分隐私的实验研究**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2009.03561)（Naseri 等，2020）\n- [**分析自然语言模型更新的信息泄露**](https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.07942)（Brockschmidt 等，2020）\n- [**通过机器学习估计 g-泄漏**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.04399)（Romanelli 等，2020）\n- [**嵌入模型中的信息泄露**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2004.00053)（Song 和 Raghunathan，2020）\n- [**捉迷藏隐私挑战**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.12087)（Jordan 等，2020）\n- [**合成数据——匿名化的“土拨鼠之日”**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2011.07018)（Stadler 等，2020）（[代码](https:\u002F\u002Fgithub.com\u002Fspring-epfl\u002Fsynthetic_data_release)）\n- [**鲁棒的成员身份编码：深度学习中的推理攻击与版权保护**](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.12982.pdf)（Song 和 Shokri，2020）\n- [**图嵌入中隐私泄露的量化**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2010.00906)（Duddu 等，2020）\n- [**对比学习的隐私风险量化与缓解**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.04140)（He 和 Zhang，2021）\n- [**编码式机器遗忘**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2012.15721)（Aldaghri 等，2020）\n- [**不可遗忘的样本：使个人数据无法被利用**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2101.04898)（Huang 等，2021）\n- [**利用费希尔信息测量机器学习模型中的数据泄露**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2102.11673)（Hannun 等，2021）\n- [**针对迁移学习的教师模型指纹攻击**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2106.12478)（Chen 等，2021）\n- [**机器学习中信息泄露的界值估计**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2105.03875)（Del Grosso 等，2021）\n- [**RoFL：用于安全联邦学习的可证明鲁棒性**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.03311)（Burkhalter 等，2021）\n- [**学习破解深度感知哈希：以 NeuralHash 为例**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2111.06628)（Struppek 等，2021）\n- [**隐私洋葱效应：记忆是相对的**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.10469)（Carlini 等，2022）\n- [**真相血清：通过投毒揭示机器学习模型的秘密**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.00032)（Tramer 等，2022）\n- [**LCANets++：基于多层神经网络和侧向竞争的鲁棒音频分类**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2308.12882)（Dibbo 等，2023）","# awesome-ml-privacy-attacks 快速上手指南\n\n`awesome-ml-privacy-attacks` 并非一个单一的可安装软件包，而是一个精心整理的**资源列表仓库**。它汇集了关于机器学习隐私攻击（如成员推断、模型提取、数据重建等）的学术论文、开源代码实现以及测试工具。\n\n本指南将帮助你如何利用该仓库快速找到所需的攻击代码或测试工具，并运行其中一个典型的示例。\n\n## 环境准备\n\n由于该仓库包含多种不同论文的代码实现，具体的系统要求取决于你选择运行的特定项目。但大多数基于 Python 的机器学习隐私攻击工具通常具备以下通用前置依赖：\n\n*   **操作系统**: Linux (推荐 Ubuntu 20.04+), macOS, 或 Windows (WSL2 推荐)\n*   **Python 版本**: Python 3.8 - 3.10 (具体视目标项目而定)\n*   **核心依赖库**:\n    *   `PyTorch` 或 `TensorFlow` (深度学习框架)\n    *   `NumPy`, `Pandas`, `Scikit-learn`\n    *   `Git` (用于克隆仓库)\n\n**建议**: 在开始之前，请确保已安装 `conda` 或 `venv` 以创建独立的虚拟环境，避免依赖冲突。\n\n## 安装步骤\n\n由于这是一个资源索引库，你不需要“安装”它本身，而是需要克隆仓库并从中获取具体的工具代码。\n\n### 1. 克隆仓库\n\n使用 Git 克隆该资源列表到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FHongshengHu\u002Fawesome-ml-privacy-attacks.git\ncd awesome-ml-privacy-attacks\n```\n\n*(国内用户若下载缓慢，可使用镜像加速：)*\n```bash\ngit clone https:\u002F\u002Fgitee.com\u002Fmirror\u002Fawesome-ml-privacy-attacks.git\n# 注意：如果 Gitee 没有同步镜像，建议使用 git clone --depth=1 \u003Cgithub_url> 减少下载量\n```\n\n### 2. 选择并安装具体工具\n\n浏览仓库中的 `README.md` 或 `Papers and Code` 部分，找到你感兴趣的具体攻击工具（例如 `PrivacyRaven` 或某篇论文的实现代码）。\n\n以 **PrivacyRaven** (一个综合性的隐私测试工具) 为例，安装步骤如下：\n\n```bash\n# 进入 PrivacyRaven 目录 (假设已从主列表链接跳转或单独克隆)\ngit clone https:\u002F\u002Fgithub.com\u002Ftrailofbits\u002FPrivacyRaven.git\ncd PrivacyRaven\n\n# 创建并激活虚拟环境\npython3 -m venv venv\nsource venv\u002Fbin\u002Factivate  # Windows 用户使用: venv\\Scripts\\activate\n\n# 安装依赖\npip install -r requirements.txt\n# 或者直接从 PyPI 安装\npip install privacyraven\n```\n\n*(注：若需使用国内镜像源加速 pip 安装，请添加 `-i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple` 参数)*\n\n## 基本使用\n\n以下以 **PrivacyRaven** 为例，展示如何对一个简单的机器学习模型发起成员推断攻击（Membership Inference Attack）。\n\n### 1. 准备目标模型\n首先，你需要有一个训练好的模型（例如使用 PyTorch 或 TensorFlow 训练的模型），并将其保存为文件，或者在代码中直接定义。\n\n### 2. 运行攻击脚本\n创建一个名为 `attack_example.py` 的文件，编写以下最小化示例代码：\n\n```python\nimport privacyraven\nfrom privacyraven.core import MembershipInferenceAttack\nfrom privacyraven.models import TargetModel\n\n# 1. 定义或加载你的目标模型 (此处仅为伪代码示意，需替换为真实模型加载逻辑)\n# target_model = load_my_pytorch_model()\nclass SimpleTarget(TargetModel):\n    def predict(self, x):\n        # 返回模型的预测概率\n        return model(x)\n\ntarget = SimpleTarget()\n\n# 2. 初始化成员推断攻击\n# shadow_model_type: 影子模型类型，'mlp' 表示多层感知机\nattack = MembershipInferenceAttack(\n    target_model=target,\n    shadow_model_type='mlp',\n    num_shadow_models=5,\n    attack_model_type='nn'\n)\n\n# 3. 执行攻击\n# data_loader: 你的测试数据加载器\n# results 将包含攻击成功率及每个样本的推断结果\nresults = attack.run(data_loader=test_loader)\n\nprint(f\"Attack Success Rate: {results.success_rate}\")\nprint(results.detailed_results)\n```\n\n### 3. 查看结果\n运行脚本：\n\n```bash\npython attack_example.py\n```\n\n控制台将输出攻击的成功率以及模型对特定数据点的记忆程度分析。你可以参考仓库中 `Papers and Code` 章节下的其他链接，获取针对特定攻击类型（如模型提取、数据重建）的更详细代码实现。","某金融科技公司正在开发一款基于深度学习的信贷审批模型，团队需要在模型上线前评估其是否存在泄露用户敏感训练数据的风险。\n\n### 没有 awesome-ml-privacy-attacks 时\n- 安全工程师难以系统性地了解最新的隐私攻击手段（如成员推断、模型反演），只能零散地搜索论文，极易遗漏关键威胁。\n- 缺乏权威的测试工具清单，团队不得不自行复现学术论文代码，导致环境配置耗时数周且常因代码缺失而失败。\n- 无法快速找到针对特定场景（如联邦学习或嵌入式模型）的攻击案例，导致风险评估覆盖不全，留下安全盲区。\n- 面对复杂的攻击分类体系，团队内部对“属性推断”与“分布推断”等概念理解混乱，难以制定统一的防御标准。\n\n### 使用 awesome-ml-privacy-attacks 后\n- 研究人员通过\"Surveys and Overviews\"板块快速掌握了从基础成员推断到复杂模型提取的完整攻击图谱，建立了系统的威胁认知。\n- 直接调用列表中集成的成熟工具（如 PrivacyRaven、TensorFlow Privacy 和 ART），在两天内便搭建起自动化隐私压力测试流水线。\n- 依据分类清晰的论文索引，精准定位到与信贷场景高度相关的过拟合关联攻击研究，实现了对核心风险点的定向排查。\n- 团队利用列表中提供的代码仓库链接，快速复现了主流攻击算法，量化了模型泄露风险并据此优化了差分隐私参数。\n\nawesome-ml-privacy-attacks 将分散的学术成果转化为可落地的防御武器库，帮助团队在模型发布前高效识别并修补隐私漏洞。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fstratosphereips_awesome-ml-privacy-attacks_2f4498d2.png","stratosphereips","Stratosphere IPS","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fstratosphereips_72d9ce2c.png","Cybersecurity Research Laboratory at the Czech Technical University in Prague. Creators of Slips, a free software machine learning-based behavioral IDS\u002FIPS.",null,"StratosphereIPS","https:\u002F\u002Fwww.stratosphereips.org","https:\u002F\u002Fgithub.com\u002Fstratosphereips",634,92,"2026-04-02T20:23:23",1,"","未说明",{"notes":87,"python":85,"dependencies":88},"该仓库是一个关于机器学习隐私攻击的论文和工具清单（Awesome List），本身不是一个可直接运行的单一软件工具。其中列出的各个具体项目（如 PrivacyRaven, TensorFlow Privacy, ART 等）有各自独立的运行环境要求，需参考对应项目的链接获取详细信息。",[],[14],[91,92,93,94],"awesome-list","awesome","machine-learning","privacy","2026-03-27T02:49:30.150509","2026-04-07T02:28:11.088825",[],[]]