[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-hindupuravinash--nips2017":3,"tool-hindupuravinash--nips2017":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":75,"owner_website":82,"owner_url":83,"languages":81,"stars":84,"forks":85,"last_commit_at":86,"license":81,"difficulty_score":87,"env_os":88,"env_gpu":89,"env_ram":89,"env_deps":90,"category_tags":93,"github_topics":94,"view_count":23,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":99,"updated_at":100,"faqs":101,"releases":117},3640,"hindupuravinash\u002Fnips2017","nips2017","A list of resources for all invited talks, tutorials, workshops and presentations at NIPS 2017","nips2017 是一个专为 2017 年神经信息处理系统大会（NIPS）打造的开源资源汇总库。鉴于该年度会议规模空前，演讲、教程和研讨会内容极其丰富但分散，nips2017 致力于解决参会者与研究者难以系统性获取完整会议资料（如幻灯片、视频回放及代码实现）的痛点。它将原本零散的信息整合为结构清晰的清单，涵盖了所有特邀演讲、前沿教程及专题研讨会的具体链接。\n\n该项目特别适合人工智能领域的研究人员、开发者以及希望深入理解行业趋势的学生使用。无论是想回顾 John Platt 关于未来百年发展的宏观展望，还是钻研深度学习在机器人领域的应用细节，亦或是学习最优传输、公平性机器学习等硬核技术教程，用户都能在此快速定位所需资源。其独特的技术亮点在于不仅提供了视频链接，还尽可能收录了演讲者的原始幻灯片与相关代码仓库，极大地降低了复现算法和学习前沿理论的时间成本。作为一个由社区共同维护的开放项目，nips2017 以友好的协作方式，成为了连接全球 AI 学者与宝贵知识资产的桥梁，帮助用户高效汲取顶级学术会议的精华。","# NIPS 2017\n\n\u003Cp align=\"center\">\u003Cimg width=\"50%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhindupuravinash_nips2017_readme_51f52dabc008.jpg\" \u002F>\u003C\u002Fp>\n\nThis 
year's Neural Information Processing Systems (NIPS) 2017 conference held at Long Beach Convention Center, Long Beach California has been the biggest ever! Here's a list of resources and slides of all invited talks, tutorials and workshops.\n\nContributions are welcome. You can add links via pull requests or create an issue to lemme know something I missed or to start a discussion. If you know the speakers, please ask them to upload slides online!\n\nCheck out [Deep Hunt](https:\u002F\u002Fwww.deephunt.in) - a curated monthly AI newsletter for this repo as a [blog post](https:\u002F\u002Fdeephunt.in\u002Fnips-2017-e580ebc9c7b2) and follow me on [Twitter](https:\u002F\u002Fwww.twitter.com\u002Fhindupuravinash).\n\n## Contents\n\n- [Invited Talks](#invited-talks)\n\n- [Tutorials](#tutorials)\n\n- [Workshops](#workshops)\n\n- [WiML](#wiml)\n\n\n## Invited Talks\n\n- **Powering the next 100 years**\n\n  John Platt\n\n  Slides · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HL60wgrT67k) · Code\n\n- **Why AI Will Make it Possible to Reprogram the Human Genome**\n\n  Brendan J Frey\n\n  [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QJLQBSQJEus)\n\n- **The Trouble with Bias**\n\n  Kate Crawford\n\n  [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fMym_BKWQzk)\n\n- **The Unreasonable Effectiveness of Structure**\n\n  Lise Getoor\n\n  Slides · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=t4k5LKCpboc)\n\n- **Deep Learning for Robotics**\n\n  Pieter Abbeel\n\n  [Slides](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002Ffdw7q8mx3x4wr0c\u002F2017_12_xx_NIPS-keynote-final.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=po9z_tMuEwE) · Code\n\n- **Learning State Representations**\n\n  Yael Niv\n  \n  [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FhOwFDGm0d4)\n\n- **On Bayesian Deep Learning and Deep Bayesian Learning**\n\n  Yee Whye Teh\n\n  [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=9saauSBgmcQ)\n\n## Tutorials\n\n- 
**Deep Learning: Practice and Trends**\n\n  Nando de Freitas · Scott Reed · Oriol Vinyals\n\n  [Slides](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1SuwiICLERd7SfYo3FiqNG0tCEBUjKcT7\u002Fview) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YJnddoa8sHk) · Code\n\n- **Reinforcement Learning with People**\n\n  Emma Brunskill\n\n  Slides · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=TqT9nIx27Eg) · Code\n\n- **A Primer on Optimal Transport**\n\n  Marco Cuturi · Justin M Solomon\n\n  [Slides](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F55tb2cf3zipl6xu\u002FaprimeronOT.pdf) · Video · Code\n\n- **Deep Probabilistic Modelling with Gaussian Processes**\n\n  Neil D Lawrence\n\n  [Slides](http:\u002F\u002Finverseprobability.com\u002Ftalks\u002Flawrence-nips17\u002Fdeep-probabilistic-modelling-with-gaussian-processes.html) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RAiPlfohjJo) · Code    \n\n- **Fairness in Machine Learning**\n\n  Solon Barocas · Moritz Hardt\n\n  [Slides](http:\u002F\u002Fmrtz.org\u002Fnips17\u002F#\u002F) · Video · Code\n\n- **Statistical Relational Artificial Intelligence: Logic, Probability and Computation**\n\n  Luc De Raedt · David Poole · Kristian Kersting · Sriraam Natarajan\n\n  Slides · Video · Code\n\n- **Engineering and Reverse-Engineering Intelligence Using Probabilistic Programs, Program Induction, and Deep Learning**\n\n  Josh Tenenbaum · Vikash K Mansinghka\n\n  Slides · Video · Code\n\n- **Differentially Private Machine Learning: Theory, Algorithms and Applications**\n\n  Kamalika Chaudhuri · Anand D Sarwate\n\n  [Slides](http:\u002F\u002Fwww.ece.rutgers.edu\u002F~asarwate\u002Fnips2017\u002FNIPS17_DPML_Tutorial.pdf) · Video · Code\n\n- **Geometric Deep Learning on Graphs and Manifolds**\n\n  Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun\n\n  [Slides](http:\u002F\u002Fgeometricdeeplearning.com\u002Fslides\u002FNIPS-GDL.pdf) · 
[Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=LvmjbXZyoP0) · Code\n  ​            \n## Workshops\n\n- ### [ML Systems Workshop @ NIPS 2017](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Findex.html)\n\n  Aparna Lakshmiratan · Sarah Bird · Siddhartha Sen · Christopher Ré · Li Erran Li · Joseph Gonzalez · Daniel Crankshaw\n\n  - A distributed execution engine for emerging AI applications\n\n    Ion Stoica\n\n  - The Case for Learning Database Indexes    \n\n  - [Federated Multi-Task Learning](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fmocha-NIPS.pdf)\n\n    Virginia Smith\n\n  - [Accelerating Persistent Neural Networks at Datacenter Scale](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fbrainwave-nips17.pdf)\n\n    Daniel Lo\n\n  - [DLVM: A modern compiler framework for neural network DSLs](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdlvm-nips17.pdf)\n\n    Richard Wei · Lane Schwartz · Vikram Adve\n\n  - [Machine Learning for Systems and Systems for Machine Learning](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdean-nips17.pdf)\n\n    Jeff Dean\n\n  - [Creating an Open and Flexible ecosystem for AI models with ONNX](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002FONNX-workshop.pdf)\n\n    Sarah Bird · Dmytro Dzhulgakov \n\n  - [NSML: A Machine Learning Platform That Enables You to Focus on Your Models](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fnsml_slides.pdf)\n\n     Nako Sung\n\n  - [DAWNBench: An End-to-End Deep Learning Benchmark and Competition](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdawn-nips17.pptx)\n\n    Cody Coleman\n\n- ### [Bayesian Deep Learning](http:\u002F\u002Fbayesiandeeplearning.org\u002F)\n\n  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew G Wilson · Diederik P. 
(Durk) Kingma · Zoubin Ghahramani · Kevin P Murphy · Max Welling\n\n  - [Why Aren't You Using Probabilistic Programming?](http:\u002F\u002Fdustintran.com\u002Ftalks\u002FTran_Probabilistic_Programming.pdf)\n\n    Dustin Tran\n\n  - Automatic Model Selection in BNNs with Horseshoe Priors  \n\n    Finale Doshi\n\n  - Deep Bayes for Distributed Learning, Uncertainty Quantification and Compression\n\n    Max Welling \n\n  - Stochastic Gradient Descent as Approximate Bayesian Inference  \n\n    Matt Hoffman\n\n  - [Recent Advances in Autoregressive Generative Models](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F11CNWY5op_J5PvP02J9g8tciAom-MW9MZ\u002Fview)\n\n    Nal Kalchbrenner\n\n  - Deep Kernel Learning  \n\n    Russ Salakhutdinov\n\n  - Bayes by Backprop\n\n    Meire Fortunato\n\n  - How do the Deep Learning layers converge to the Information Bottleneck limit by Stochastic Gradient Descent?  \n\n    Naftali (Tali) Tishby \n\n- ### [Learning with Limited Labeled Data: Weak Supervision and Beyond](https:\u002F\u002Flld-workshop.github.io\u002F)\n\n  Isabelle Augenstein · Stephen Bach · Eugene Belilovsky · Matthew Blaschko · Christoph Lampert · Edouard Oyallon · Emmanouil Antonios Platanios · Alexander Ratner · Christopher Ré\n\n  - [Welcome Note](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fopening.pdf)\n\n  - [Tales from fMRI: Learning from limited labeled data](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fgael_varoquaux_lld.pdf)   \n\n    Gaël Varoquaux \n\n  - [Learning from Limited Labeled Data (But a Lot of Unlabeled Data)](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Ftom_mitchell_lld.pdf)\n\n    Tom Mitchell\n\n  - [Light Supervision of Structured Prediction Energy Networks](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fandrew_mccallum_lld.pdf)\n\n    Andrew McCallum\n\n  - [Forcing Neural Link Predictors to Play by the 
Rules](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fsebastian_riedel_lld.pdf)\n\n    Sebastian Riedel\n\n  - [Panel: Limited Labeled Data in Medical Imaging](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fradiology_panel_lld.pdf)\n\n    Daniel Rubin · Matt Lungren · Ina Fiterau\n\n  - [Sample and Computationally Efficient Active Learning Algorithms](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fnina_balcan_lld.pdf)\n\n    Nina Balcan\n\n  - [That Doesn't Make Sense! A Case Study in Actively Annotating Model Explanations](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fsameer_singh_lld.pdf)\n\n     Sameer Singh\n\n  - [Overcoming Limited Data with GANs](http:\u002F\u002Fwww.iangoodfellow.com\u002Fslides\u002F2017-12-09-label.pdf)\n\n    Ian Goodfellow\n\n  - [What’s so Hard About Natural Language Understanding?](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Falan_ritter_lld.pdf)\n\n    Alan Ritter\n\n  - [Closing Remarks](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fclosing.pdf)\n\n- ### [Advances in Approximate Bayesian Inference](http:\u002F\u002Fapproximateinference.org\u002F)\n\n  Francisco Ruiz · Stephan Mandt · Cheng Zhang · James McInerney · Dustin Tran · Tamara Broderick · Michalis Titsias · David Blei · Max Welling\n\n  - [Learning priors, likelihoods, or posteriors](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FMurray2017.pdf)\n\n    Iain Murray\n\n  - Learning Implicit Generative Models Using Differentiable Graph Tests\n\n    Josip Djolonga \n\n  - [Gradient Estimators for Implicit Models)](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FLi2017.pdf)\n\n    Yingzhen Li\n\n  - Variational Autoencoders for Recommendation\n\n    Dawen Liang\n\n  - [Approximate Inference in Industry: Two Applications at Amazon](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FArchambeau2017.pdf)\n\n    Cedric Archambeau\n\n  - [Variational 
Inference based on Robust Divergences](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FFutami2017.pdf)\n\n    Futoshi Futami\n\n  - [Adversarial Sequential Monte Carlo](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FKempinska2017.pdf)\n\n    Kira Kempinska\n\n  - [Scalable Logit Gaussian Process Classification](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FWenzel2017.pdf)\n\n    Florian Wenzel\n\n  - [Variational inference in deep Gaussian processes](http:\u002F\u002Fadamian.github.io\u002Ftalks\u002FDamianou_NIPS17.pdf)\n\n    Andreas Damianou\n\n  - [Taylor Residual Estimators via Automatic Differentiation](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FMiller2017.pdf)\n\n    Andrew Miller\n\n  - [Differential privacy and Bayesian learning](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FHonkela2017.pdf)\n    \n    Antti Honkela\n\n  - Frequentist Consistency of Variational Bayes\n    \n    Yixin Wang\n\n- ### [Deep Learning at Supercomputer Scale](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002F)\n\n  Erich Elsen · Danijar Hafner · Zak Stone · Brennan Saeta\n\n  - [Generalization Gap](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNIPS2017_SharpMinima.pdf)\n\n    Nitish Keskar\n\n  - [Closing the Generalization Gap](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FTrainLongerPresentation.pdf)\n\n    Itay Hubara · Elad Hoffer \n\n  - [Don’t Decay the Learning Rate, Increase the Batchsize)](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDLSC_talk.pdf)\n\n    Sam Smith\n\n  - [ImageNet in 1 Hour](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNIPS-workshop-priya-final.pptx)\n\n    Priya Goyal\n\n  - [ImageNet is the new MNIST](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FImageNetNewMNIST.pdf)\n  
\n    Chris Ying  \n\n  - [KFAC and Natural Gradients](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FK-FAC.pdf)\n\n    Matthew Johnson & Daniel Duckworth\n\n  - [Neumann Optimizer](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNeumannOptimizerFinal.pdf)\n\n    Shankar Krishnan\n\n  - [Evolutionary Strategies](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSalimans_ES.pdf)\n\n    Tim Salimans\n\n  - [Learning Device Placement](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDevicePlacementWithDeepRL.pdf)\n\n    Azalia Mirhoseini\n\n  - [Scaling and Sparsity](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002Fscaling-is-predictable.pdf)\n\n    Gregory Diamos\n\n  - [Small World Network Architectures](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSmallWorldNetworkArchitectures.pdf)\n\n    Scott Gray\n\n  - [Scalable RL & AlphaGo](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDeepReinforcementLearningatScale.pdf)\n    \n    Timothy Lillicrap\n\n  - [Scaling Deep Learning to 15 PetaFlops](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FThorstenLargeScaleDeepLearning.pdf)\n    \n    Thorsten Kurth\n\n  - [Scalable Silicon Compute](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSimonKnowlesGraphCore.pdf)\n\n    Simon Knowles\n\n  - [Practical Scaling Techniques](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002Fpractical_scaling_techniques_v6.pdf)\n\n    Ujval Kapasi\n\n  - Designing for Supercompute-Scale Deep Learning\n\n    Michael James\n\n- ### [Machine Learning Challenges as a Research Tool](http:\u002F\u002Fciml.chalearn.org\u002Fciml2017)\n\n  Isabelle Guyon · Evelyne Viegas · Sergio Escalera · Jacob D Abernethy\n\n  - [RAMP 
platform](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F12CwwCtCLDkp92MurS1aEDXV8YcXJNGJH\u002Fview)\n\n    Balázs Kégl\n\n  - [Automatic evaluation of chatbots](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjU0YjZiMmM4ZDVhMTA1ZjA)\n\n    Varvara Logacheva (speaker) · Mikhail Burtsev\n\n  - [TrackML](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ifBM6PCpIUFSnI_TBiBB5YeCx6Kguj1Q\u002Fview)\n\n    David Rousseau\n\n  - [Data science bowl](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjYxMzI1ZDY4ZWE4Yzc4NzQ)\n\n    Drew Farris\n\n  - [CrowdAI](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmRlOWMwYmI5MGQ1NGNh)\n  \n    Mohanty Sharada\n\n  - Kaggle platform\n\n    Ben Hamner\n\n  - [Project Malmo, Minecraft](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmZiYTAyYmY3NjhiOGQ0OA)\n\n    Katja Hofmann\n\n  - [Project Alloy](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjdiNzIwMWMwMWY5ZjlhMjY)\n\n    Laura Seaman\n\n  - [Education and public service](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjQ1NWM4ODNkNjQzMjgxNTQ)\n\n    Jonathan C. 
Stroud\n\n  - [AutoDL (Google challenge)](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjJiMTU0MTlmZjY5NGZiOGI)\n\n    Olivier Bousquet\n\n  - [Scoring rule markets](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjNlNWRlMDdjYTgwNDFkZTA)\n\n    Rafael Frongillo · Bo Waggoner\n\n  - [ENCODE-DREAM challenge](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjZjYzdiZGZkMWI1MjliNzk)\n    \n    Akshay Balsubramani\n\n  - [Codalab platform](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjVmY2U0NTk3M2RhYTRlZGY)\n    \n    Evelyne Viegas · Sergio Escalera · Isabelle Guyon\n\n- ### [Bayesian optimization for science and engineering](https:\u002F\u002Fbayesopt.github.io\u002Findex.html)\n\n  Ruben Martinez-Cantin · José Miguel Hernández-Lobato · Javier Gonzalez\n\n  - Towards Safe Bayesian Optimization\n\n    Andreas Krause \n\n  - Learning to learn without gradient descent by gradient descent\n\n    Yutian Chen\n\n  - [Scaling Bayesian Optimization in High Dimensions](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002Fbayesopt_2017_jegelka.pdf)\n\n    Stefanie Jegelka\n\n  - [Neuroadaptive Bayesian Optimization - Implications for Cognitive Sciences](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FLorenz_NIPS_Workshop_2017.pdf)\n\n    Romy Lorenz\n\n  - [Knowledge Gradient Methods for Bayesian Optimization](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FBayesOptWorkshopFrazier.pdf)\n  \n    Peter Frazier \n\n  - [Quantifying and reducing uncertainties on sets under Gaussian Process priors](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FNIPS_BOws_Ginsbourger_09_12_2017.pdf)\n\n    David Ginsbourger\n\n- ### 
[(Almost) 50 shades of Bayesian Learning: PAC-Bayesian trends and insights](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002F50shadesbayesian.html)\n\n  Benjamin Guedj · Pascal Germain · Francis Bach\n\n  - Dimension-free PAC-Bayesian Bounds - [Part 1](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fcatoni_nips2017_1.pdf) [Part 2](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fcatoni_nips2017_2.pdf)\n\n    Olivier Catoni\n\n  - [A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fgrunwald_nips2017.pdf)\n\n    Peter Grünwald\n\n  - [A Tutorial on PAC-Bayesian Theory](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Flaviolette_nips2017.pdf)\n\n    François Laviolette\n\n  - [Some recent advances on Approximate Bayesian Computation techniques](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fmarin_nips2017.pdf)\n\n    Jean-Michel Marin\n\n  - [A PAC-Bayesian Approach to Spectrally-Normalized Margin Bounds for Neural Networks](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fneyshabur_nips2017.pdf)\n  \n    Behnam Neyshabur\n\n  - [Deep Neural Networks: From Flat Minima to Numerically Nonvacuous Generalization Bounds via PAC-Bayes](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Froy_nips2017.pdf)\n\n    Dan Roy\n \n - [A Strongly Quasiconvex PAC-Bayesian Bound](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fseldin_nips2017.pdf)\n  \n    Yevgeny Seldin\n\n  - [Distribution Dependent Priors for Stable Learning](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fshawe-taylor_nips2017.pdf)\n\n    John Shawe-Taylor\n\n## Symposiums\n\n- ### [Interpretable Machine Learning](http:\u002F\u002Finterpretable.ml\u002F)\n    \n  Andrew G Wilson · Jason Yosinski · Patrice Simard · Rich Caruana · William Herlands\n\n  - The role of causality for interpretability.\n  
  \n    Bernhard Scholkopf \n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_Bernhard_Schoelkopf.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=9C3RvDs_hHw)\n\n  - Interpretable Discovery in Large Image Data Sets\n    \n    Kiri Wagstaff\n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_kiri_wagstaff.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=_K2wVfi_KDM)\n\n  - The (hidden) Cost of Calibration.\n    \n    Bernhard Scholkopf \n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_Kilian_Weinberger.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fDtQQ9GlSJY)\n\n  - Panel Discussion\n    \n    Hanna Wallach, Kiri Wagstaff, Suchi Saria, Bolei Zhou, and Zack Lipton. Moderated by Rich Caruana.\n\n    [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kruwzfvKt3w)\n\n  - Interpretability for AI safety\n    \n    Victoria Krakovna\n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_victoria_Krakovna.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3HzIutdlpho)\n\n  - Manipulating and Measuring Model Interpretability.\n    \n    Jenn Wortman Vaughan\n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_jenn_wortman_vaughan.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8ZoL-cKRf2o)\n\n  - Debugging the Machine Learning Pipeline.\n    \n    Jerry Zhu\n\n    [Slides](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_jerry_zhu.pdf) · [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=XO2281l_JVw)\n\n  - Panel Debate and Followup Discussion\n    \n    Yann LeCun, Kilian Weinberger, Patrice Simard, and Rich Caruana.\n\n    [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2hW05ZfsUUo)\n\n- ### [Deep Reinforcement Learning](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdeeprl-symposium-nips2017\u002Fhome)\n    \n 
 Pieter Abbeel · Yan Duan · David Silver · Satinder Singh · Junhyuk Oh · Rein Houthooft\n\n  - Mastering Games with Deep Reinforcement Learning\n    \n    David Silver\n\n    [Video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=A3ekFcZ3KNw)\n\n  - Reproducibility in Deep Reinforcement Learning and Beyond\n    \n    Joelle Pineau\n\n    Slides · Video\n\n  - Neural Map: Structured Memory for Deep RL\n    \n    Ruslan Salakhutdinov\n\n    [Slides](http:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002FNIPS2017_StructureMemoryForDeepRL.pdf)\n\n  - Deep Exploration Via Randomized Value Functions\n    \n    Ben Van Roy\n\n    Slides · Video\n  \n  - Artificial Intelligence Goes All-In\n    \n    Michael Bowling    \n\n- ### [Kinds of intelligence: types, tests and meeting the needs of society](http:\u002F\u002Fwww.kindsofintelligence.org\u002F)\n    \n  José Hernández-Orallo · Zoubin Ghahramani · Tomaso A Poggio · Adrian Weller · Matthew Crosby\n\n  - Opening remarks\n    \n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS-symposium-opening.pdf)\n\n  - Why the mind evolved: the evolution of navigation in real landscapes\n    \n    Lucia Jacob\n\n    Slides · Video\n\n  - The distinctive intelligence of young children: Insights for AI from cognitive development\n    \n    Alison Gopnik\n\n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FGopnik-NIPS.pptx)\n\n  - Learning from first principles\n    \n    Demis Hassabis\n\n    Slides · Video\n  \n  - Types of intelligence: why human-like AI is important\n    \n    Josh Tenenbaum   \n\n  - The road to artificial general intelligence\n    \n    Gary Marcus\n\n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FGopnik-NIPS.pptx)\n\n  - Video games and the road to collaborative AI\n    \n    Katja Hofmann\n\n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002F2017-12-07-Katja-Hofmann-symposium-kinds-of-intelligence.pdf) · Video\n  \n  - 
Fair questions\n    \n    Cynthia Dwork\n\n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS2017-Dwork.pdf)\n \n  - States, corporations, thinking machines: artificial agency and artificial intelligence\n    \n    David Runciman\n\n    Slides · Video\n  \n  - Closing remarks\n    \n    [Slides](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS-symposium-closing.pdf)  \n\n## WiML\n\n- **Bayesian machine learning: Quantifying uncertainty and robustness at scale**\n\n  Tamara​ ​Broderick​\n\n  Slides · Video · Code\n\n- **Towards Communication-Centric Multi-Agent Deep Reinforcement Learning for Guarding a Territory**\n\n  Aishwarya​ ​Unnikrishnan\n\n  Slides · Video · Code\n\n- **Graph convolutional networks can encode three-dimensional genome architecture in deep learning models for genomics**\n\n  Peyton​ ​Greenside​\n\n  Slides · Video · Code\n\n- **Machine Learning for Social Science**\n\n  Hannah​ ​Wallach​\n\n  Slides · Video · Code\n\n- **Fairness Aware Recommendations**\n\n  Palak​ ​Agarwal​\n\n  Slides · Video · Code\n\n- **Reinforcement Learning with a Corrupted Reward Channel**\n\n  Victoria​ ​Krakivna​\n\n  Slides · Video · Code\n\n- **Improving health-care: challenges and opportunities for reinforcement learning**\n\n  Joelle​ ​Pineau​\n\n  Slides · Video · Code\n\n- **Harnessing Adversarial Attacks on Deep Reinforement Learning for Improving Robustness**\n\n  Zhenyi​ ​Tang​\n\n  Slides · Video · Code\n\n- **Time-Critical Machine Learning**\n\n  Nina​ ​Mishra​\n\n  Slides · Video · Code  \n\n- **A General Framework for Evaluating Callout Mechanisms in Repeated Auctions**\n\n  Hoda​ ​Heidari​\n\n  Slides · Video · Code\n\n- **Engaging Experts: A Dirichlet Process Approach to Divergent Elicited Priors in Social Science**\n\n  Sarah​ ​Bouchat​\n\n  Slides · Video · Code\n\n- **Representation Learning in Large Attributed Graphs**\n\n  Nesreen​ ​K​ ​Ahmed​\n\n  
[Slides](https:\u002F\u002Fwww.slideshare.net\u002FNesreenAhmed2\u002Frepresentation-learning-in-large-attributed-graphs) · Video · Code      \n","# NIPS 2017\n\n\u003Cp align=\"center\">\u003Cimg width=\"50%\" src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhindupuravinash_nips2017_readme_51f52dabc008.jpg\" \u002F>\u003C\u002Fp>\n\n今年在加州长滩会议中心举行的神经信息处理系统大会（NIPS）2017，规模空前！以下是所有特邀报告、教程和研讨会的资源及幻灯片列表。\n\n欢迎大家贡献内容。您可以通过提交拉取请求添加链接，或者创建一个议题，告诉我有哪些遗漏的内容，或发起讨论。如果您认识演讲者，请鼓励他们将幻灯片上传到网上！\n\n请查看 [Deep Hunt](https:\u002F\u002Fwww.deephunt.in)——为本仓库专门制作的精选每月人工智能简报，以[博客文章](https:\u002F\u002Fdeephunt.in\u002Fnips-2017-e580ebc9c7b2)形式呈现，并在[Twitter](https:\u002F\u002Fwww.twitter.com\u002Fhindupuravinash)上关注我。\n\n## 目录\n\n- [特邀报告](#invited-talks)\n\n- [教程](#tutorials)\n\n- [研讨会](#workshops)\n\n- [WiML](#wiml)\n\n\n## 特邀报告\n\n- **驱动未来一百年**\n\n  John Platt\n\n  幻灯片 · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=HL60wgrT67k) · 代码\n\n- **为什么AI能让人类基因组重编程成为可能**\n\n  Brendan J Frey\n\n  [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=QJLQBSQJEus)\n\n- **偏见的困境**\n\n  Kate Crawford\n\n  [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fMym_BKWQzk)\n\n- **结构的不合理有效性**\n\n  Lise Getoor\n\n  幻灯片 · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=t4k5LKCpboc)\n\n- **机器人领域的深度学习**\n\n  Pieter Abbeel\n\n  [幻灯片](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002Ffdw7q8mx3x4wr0c\u002F2017_12_xx_NIPS-keynote-final.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=po9z_tMuEwE) · 代码\n\n- **学习状态表示**\n\n  Yael Niv\n  \n  [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=FhOwFDGm0d4)\n\n- **关于贝叶斯深度学习与深度贝叶斯学习**\n\n  Yee Whye Teh\n\n  [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=9saauSBgmcQ)\n\n## 教程\n\n- **深度学习：实践与趋势**\n\n  Nando de Freitas · Scott Reed · Oriol Vinyals\n\n  [幻灯片](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1SuwiICLERd7SfYo3FiqNG0tCEBUjKcT7\u002Fview) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=YJnddoa8sHk) · 代码\n\n- 
**与人一起进行强化学习**\n\n  Emma Brunskill\n\n  幻灯片 · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=TqT9nIx27Eg) · 代码\n\n- **最优传输入门**\n\n  Marco Cuturi · Justin M Solomon\n\n  [幻灯片](https:\u002F\u002Fwww.dropbox.com\u002Fs\u002F55tb2cf3zipl6xu\u002FaprimeronOT.pdf) · 视频 · 代码\n\n- **基于高斯过程的深度概率建模**\n\n  Neil D Lawrence\n\n  [幻灯片](http:\u002F\u002Finverseprobability.com\u002Ftalks\u002Flawrence-nips17\u002Fdeep-probabilistic-modelling-with-gaussian-processes.html) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=RAiPlfohjJo) · 代码    \n\n- **机器学习中的公平性**\n\n  Solon Barocas · Moritz Hardt\n\n  [幻灯片](http:\u002F\u002Fmrtz.org\u002Fnips17\u002F#\u002F) · 视频 · 代码\n\n- **统计关系人工智能：逻辑、概率与计算**\n\n  Luc De Raedt · David Poole · Kristian Kersting · Sriraam Natarajan\n\n  幻灯片 · 视频 · 代码\n\n- **利用概率程序、程序归纳与深度学习进行智能的工程化与逆向工程**\n\n  Josh Tenenbaum · Vikash K Mansinghka\n\n  幻灯片 · 视频 · 代码\n\n- **差分隐私机器学习：理论、算法与应用**\n\n  Kamalika Chaudhuri · Anand D Sarwate\n\n  [幻灯片](http:\u002F\u002Fwww.ece.rutgers.edu\u002F~asarwate\u002Fnips2017\u002FNIPS17_DPML_Tutorial.pdf) · 视频 · 代码\n\n- **图与流形上的几何深度学习**\n\n  Michael Bronstein · Joan Bruna · arthur szlam · Xavier Bresson · Yann LeCun\n\n  [幻灯片](http:\u002F\u002Fgeometricdeeplearning.com\u002Fslides\u002FNIPS-GDL.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=LvmjbXZyoP0) · 代码\n  ​            \n## 研讨会\n\n- ### [ML系统研讨会 @ NIPS 2017](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Findex.html)\n\n  Aparna Lakshmiratan · Sarah Bird · Siddhartha Sen · Christopher Ré · Li Erran Li · Joseph Gonzalez · Daniel Crankshaw\n\n  - 面向新兴AI应用的分布式执行引擎\n\n    Ion Stoica\n\n  - 学习数据库索引的理由    \n\n  - [联邦多任务学习](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fmocha-NIPS.pdf)\n\n    Virginia Smith\n\n  - [在数据中心规模下加速持久化神经网络](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fbrainwave-nips17.pdf)\n\n    Daniel Lo\n\n  - 
[DLVM：面向神经网络DSL的现代编译器框架](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdlvm-nips17.pdf)\n\n    Richard Wei · Lane Schwartz · Vikram Adve\n\n  - [用于系统的机器学习与用于机器学习的系统](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdean-nips17.pdf)\n\n    Jeff Dean\n\n  - [使用ONNX创建开放且灵活的AI模型生态系统](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002FONNX-workshop.pdf)\n\n    Sarah Bird · Dmytro Dzhulgakov \n\n  - [NSML：一款让你专注于模型的机器学习平台](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fnsml_slides.pdf)\n\n     Nako Sung\n\n  - [DAWNBench：端到端深度学习基准测试与竞赛](http:\u002F\u002Flearningsys.org\u002Fnips17\u002Fassets\u002Fslides\u002Fdawn-nips17.pptx)\n\n    Cody Coleman\n\n- ### [贝叶斯深度学习](http:\u002F\u002Fbayesiandeeplearning.org\u002F)\n\n  Yarin Gal · José Miguel Hernández-Lobato · Christos Louizos · Andrew G Wilson · Diederik P. (Durk) Kingma · Zoubin Ghahramani · Kevin P Murphy · Max Welling\n\n  - [你为什么还不使用概率编程？](http:\u002F\u002Fdustintran.com\u002Ftalks\u002FTran_Probabilistic_Programming.pdf)\n\n    Dustin Tran\n\n  - 使用马蹄形先验在BNN中自动选择模型  \n\n    Finale Doshi\n\n  - 深度贝叶斯用于分布式学习、不确定性量化和压缩\n\n    Max Welling \n\n  - 随机梯度下降作为近似贝叶斯推断  \n\n    Matt Hoffman\n\n  - [自回归生成模型的最新进展](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F11CNWY5op_J5PvP02J9g8tciAom-MW9MZ\u002Fview)\n\n    Nal Kalchbrenner\n\n  - 深度核学习  \n\n    Russ Salakhutdinov\n\n  - 反向传播中的贝叶斯方法\n\n    Meire Fortunato\n\n  - 深度学习各层如何通过随机梯度下降收敛到信息瓶颈极限？  \n\n    Naftali (Tali) Tishby \n\n- ### [有限标注数据下的学习：弱监督及其他](https:\u002F\u002Flld-workshop.github.io\u002F)\n\n  Isabelle Augenstein · Stephen Bach · Eugene Belilovsky · Matthew Blaschko · Christoph Lampert · Edouard Oyallon · Emmanouil Antonios Platanios · Alexander Ratner · Christopher Ré\n\n  - [欢迎致辞](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fopening.pdf)\n\n  - 
[来自fMRI的故事：从有限标注数据中学习](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fgael_varoquaux_lld.pdf)   \n\n    Gaël Varoquaux \n\n  - [从有限标注数据（但大量未标注数据）中学习](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Ftom_mitchell_lld.pdf)\n\n    Tom Mitchell\n\n  - [对结构化预测能量网络进行轻量级监督](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fandrew_mccallum_lld.pdf)\n\n    Andrew McCallum\n\n  - [强制神经链接预测器按规则行事](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fsebastian_riedel_lld.pdf)\n\n    塞巴斯蒂安·里德尔\n\n  - [专题讨论：医学影像中的有限标注数据](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fradiology_panel_lld.pdf)\n\n    丹尼尔·鲁宾 · 马特·伦格伦 · 伊娜·菲特劳\n\n  - [样本高效且计算高效的主动学习算法](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fnina_balcan_lld.pdf)\n\n    妮娜·巴尔坎\n\n  - [这说不通！主动标注模型解释的案例研究](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fsameer_singh_lld.pdf)\n\n    萨米尔·辛格\n\n  - [利用生成对抗网络克服数据不足的问题](http:\u002F\u002Fwww.iangoodfellow.com\u002Fslides\u002F2017-12-09-label.pdf)\n\n    伊恩·古德费洛\n\n  - [自然语言理解究竟难在哪里？](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Falan_ritter_lld.pdf)\n\n    艾伦·里特\n\n  - [闭幕致辞](https:\u002F\u002Flld-workshop.github.io\u002Fslides\u002Fclosing.pdf)\n\n- ### [近似贝叶斯推断的进展](http:\u002F\u002Fapproximateinference.org\u002F)\n\n  弗朗西斯科·鲁伊斯 · 斯特凡·曼特 · 程章 · 詹姆斯·麦金尼 · 达斯汀·特兰 · 塔玛拉·布罗德里克 · 米哈利斯·蒂西亚斯 · 大卫·布利 · 马克斯·韦林\n\n  - [学习先验、似然或后验分布](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FMurray2017.pdf)\n\n    伊恩·默里\n\n  - 使用可微分图检验学习隐式生成模型\n\n    乔西普·乔隆加 \n\n  - [隐式模型的梯度估计量](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FLi2017.pdf)\n\n    李英振\n\n  - 用于推荐的变分自编码器\n\n    梁大文\n\n  - [工业界的近似推断：亚马逊的两个应用](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FArchambeau2017.pdf)\n\n    塞德里克·阿沙姆博\n\n  - [基于稳健散度的变分推断](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FFutami2017.pdf)\n\n    古谷太史\n\n  - 
[对抗性序列蒙特卡洛](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FKempinska2017.pdf)\n\n    基拉·肯平斯卡\n\n  - [可扩展的逻辑高斯过程分类](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FWenzel2017.pdf)\n\n    弗洛里安·文策尔\n\n  - [深度高斯过程中的变分推断](http:\u002F\u002Fadamian.github.io\u002Ftalks\u002FDamianou_NIPS17.pdf)\n\n    安德烈亚斯·达米亚努\n\n  - [通过自动微分实现泰勒残差估计量](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FMiller2017.pdf)\n\n    安德鲁·米勒\n\n  - [差分隐私与贝叶斯学习](http:\u002F\u002Fapproximateinference.org\u002F2017\u002Fschedule\u002FHonkela2017.pdf)\n    \n    安蒂·洪凯拉\n\n  - 变分贝叶斯的频率学一致性\n    \n    王怡欣\n\n- ### [超级计算机规模下的深度学习](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002F)\n\n  埃里希·埃尔森 · 达尼雅尔·哈夫纳 · 扎克·斯通 · 布伦南·萨埃塔\n\n  - [泛化差距](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNIPS2017_SharpMinima.pdf)\n\n    尼提什·凯斯卡尔\n\n  - [缩小泛化差距](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FTrainLongerPresentation.pdf)\n\n    伊泰·胡巴拉 · 伊莱德·霍弗\n\n  - [不要衰减学习率，增大批次大小](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDLSC_talk.pdf)\n\n    萨姆·史密斯\n\n  - [一小时内完成ImageNet训练](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNIPS-workshop-priya-final.pptx)\n\n    普里亚·戈亚尔\n\n  - [ImageNet就是新的MNIST](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FImageNetNewMNIST.pdf)\n  \n    克里斯·英\n\n  - [KFAC与自然梯度](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FK-FAC.pdf)\n\n    马修·约翰逊 & 丹尼尔·达克沃思\n\n  - [诺依曼优化器](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FNeumannOptimizerFinal.pdf)\n\n    尚卡尔·克里希南\n\n  - [进化策略](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSalimans_ES.pdf)\n\n    蒂姆·萨利曼斯\n\n  - [学习设备布局](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDevicePlacementWithDeepRL.pdf)\n\n    阿扎莉娅·米尔霍赛尼\n\n  - 
[扩展与稀疏化](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002Fscaling-is-predictable.pdf)\n\n    格雷戈里·迪亚莫斯\n\n  - [小世界网络架构](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSmallWorldNetworkArchitectures.pdf)\n\n    斯科特·格雷\n\n  - [可扩展的强化学习与AlphaGo](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FDeepReinforcementLearningatScale.pdf)\n    \n    蒂莫西·利利克拉普\n\n  - [将深度学习扩展到15 PetaFlops](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FThorstenLargeScaleDeepLearning.pdf)\n    \n    托尔斯滕·库尔特\n\n  - [可扩展的硅基计算](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002FSimonKnowlesGraphCore.pdf)\n\n    西蒙·诺尔斯\n\n  - [实用的扩展技术](https:\u002F\u002Fsupercomputersfordl2017.github.io\u002FPresentations\u002Fpractical_scaling_techniques_v6.pdf)\n\n    乌杰瓦尔·卡帕西\n\n  - 为超级计算机规模的深度学习进行设计\n\n    迈克尔·詹姆斯\n\n- ### [机器学习挑战作为研究工具](http:\u002F\u002Fciml.chalearn.org\u002Fciml2017)\n\n  伊莎贝尔·居永 · 埃维琳·维加斯 · 塞尔吉奥·埃斯卡莱拉 · 雅各布·D·艾伯内西\n\n  - [RAMP平台](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F12CwwCtCLDkp92MurS1aEDXV8YcXJNGJH\u002Fview)\n\n    巴拉兹·凯格尔\n\n  - [聊天机器人的自动评估](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjU0YjZiMmM4ZDVhMTA1ZjA)\n\n    瓦尔瓦拉·洛加切娃（演讲者）· 米哈伊尔·布尔采夫\n\n  - [TrackML](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1ifBM6PCpIUFSnI_TBiBB5YeCx6Kguj1Q\u002Fview)\n\n    大卫·鲁索\n\n  - [数据科学竞赛](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmYzMTI1ZDY4ZWE4Yzc4NzQ)\n\n    德鲁·法里斯\n\n  - [CrowdAI](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmRlOWMwYmI5MGQ1NGNh)\n  \n    莫汉蒂·沙拉达\n\n  - Kaggle平台\n\n    本·哈姆纳\n\n  - [Project Malmo, 
Minecraft](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmZiYTAyYmY3NjhiOGQ0OA)\n\n    卡佳·霍夫曼\n\n  - [Project Alloy](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmZjYzIwMWMwMWY5ZjlhMjY)\n\n    劳拉·西曼\n\n  - [教育与公共服务](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmQ1NWM4ODNkNjQzMjgxNTQ)\n\n    乔纳森·C·斯特劳德\n\n  - [AutoDL（谷歌挑战）](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmJiMTU0MTlmZjY9NGZiOGI)\n\n    奥利维尔·布斯凯\n\n  - [评分规则市场](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmNmNWdlMDdjYTgwNDFkZEA)\n\n    拉斐尔·弗龙吉洛 · 博·瓦戈纳\n\n  - [ENCODE-DREAM挑战](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OmZjYzdiZGZkMWI1MjliNzk)\n    \n    阿克谢·巴尔苏布拉马尼\n\n  - [Codalab 平台](https:\u002F\u002Fdocs.google.com\u002Fa\u002Fchalearn.org\u002Fviewer?a=v&pid=sites&srcid=Y2hhbGVhcm4ub3JnfHdvcmtzaG9wfGd4OjVmY2U0NTk3M2RhYTRlZGY)\n    \n    埃维琳·维加斯 · 塞尔吉奥·埃斯卡莱拉 · 伊莎贝尔·居永\n\n- ### [面向科学与工程的贝叶斯优化](https:\u002F\u002Fbayesopt.github.io\u002Findex.html)\n\n  鲁本·马丁内斯-坎廷 · 何塞·米格尔·埃尔南德斯-洛巴托 · 哈维尔·冈萨雷斯\n\n  - 向安全的贝叶斯优化迈进\n\n    安德烈亚斯·克劳斯 \n\n  - 通过梯度下降学习如何不使用梯度下降进行学习\n\n    陈宇田\n\n  - [高维空间中贝叶斯优化的扩展](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002Fbayesopt_2017_jegelka.pdf)\n\n    斯特凡妮·耶格尔卡\n\n  - [神经适应性贝叶斯优化——对认知科学的影响](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FLorenz_NIPS_Workshop_2017.pdf)\n\n    罗米·洛伦茨\n\n  - [贝叶斯优化中的知识梯度方法](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FBayesOptWorkshopFrazier.pdf)\n  \n    彼得·弗雷泽 \n\n  - 
[在高斯过程先验下集合不确定性的量化与降低](https:\u002F\u002Fbayesopt.github.io\u002Fslides\u002F2017\u002FNIPS_BOws_Ginsbourger_09_12_2017.pdf)\n\n    大卫·金斯堡\n\n- ### [(几乎) 贝叶斯学习的五十度灰：PAC-贝叶斯趋势与洞见](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002F50shadesbayesian.html)\n\n  本杰明·盖德日 · 帕斯卡尔·热尔曼 · 弗朗西斯·巴赫\n\n  - 无维度 PAC-贝叶斯界——[第一部分](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fcatoni_nips2017_1.pdf) [第二部分](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fcatoni_nips2017_2.pdf)\n\n    奥利维埃·卡托尼\n\n  - [通过统一的 PAC-贝叶斯-拉德马赫-什塔尔科夫-MDL 复杂度获得紧致的超额风险界](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fgrunwald_nips2017.pdf)\n\n    彼得·格吕恩瓦尔德\n\n  - [PAC-贝叶斯理论教程](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Flaviolette_nips2017.pdf)\n\n    弗朗索瓦·拉维奥莱特\n\n  - [近来关于近似贝叶斯计算技术的一些进展](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fmarin_nips2017.pdf)\n\n    让-米歇尔·马兰\n\n  - [基于 PAC-贝叶斯方法的神经网络谱归一化间隔界](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fneyshabur_nips2017.pdf)\n  \n    贝纳姆·内沙布尔\n\n  - [深度神经网络：从平坦极小值到通过 PAC-贝叶斯获得数值非空泛化界](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Froy_nips2017.pdf)\n\n    丹·罗伊\n\n  - [一个强拟凸的 PAC-贝叶斯界](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fseldin_nips2017.pdf)\n  \n    叶夫根尼·谢尔丁\n\n  - [用于稳定学习的分布相关先验](https:\u002F\u002Fbguedj.github.io\u002Fnips2017\u002Fpdf\u002Fshawe-taylor_nips2017.pdf)\n\n    约翰·肖-泰勒\n\n\n\n## 学术研讨会\n\n- ### [可解释的机器学习](http:\u002F\u002Finterpretable.ml\u002F)\n    \n  安德鲁·威尔逊 · 杰森·约辛斯基 · 帕特里斯·西马尔 · 里奇·卡鲁阿纳 · 威廉·赫兰兹\n\n  - 因果关系在可解释性中的作用。\n    \n    伯恩哈德·朔尔科普夫 \n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_Bernhard_Schoelkopf.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=9C3RvDs_hHw)\n\n  - 大型图像数据集中的可解释发现\n    \n    基里·瓦格斯塔夫\n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_kiri_wagstaff.pdf) · 
[视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=_K2wVfi_KDM)\n\n  - 校准的（隐藏）代价。\n    \n    基利安·温伯格\n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_Kilian_Weinberger.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=fDtQQ9GlSJY)\n\n  - 圆桌讨论\n    \n    汉娜·沃拉赫、基里·瓦格斯塔夫、苏奇·萨里亚、周博磊和扎克·利普顿。由里奇·卡鲁阿纳主持。\n\n    [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kruwzfvKt3w)\n\n  - 人工智能安全中的可解释性\n    \n    维多利亚·克拉科夫娜\n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_victoria_Krakovna.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=3HzIutdlpho)\n\n  - 操控与衡量模型可解释性。\n    \n    珍·沃特曼·沃恩\n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_jenn_wortman_vaughan.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=8ZoL-cKRf2o)\n\n  - 调试机器学习流水线。\n    \n    杰里·朱\n\n    [幻灯片](http:\u002F\u002Fs.interpretable.ml\u002Fnips_interpretable_ml_2017_jerry_zhu.pdf) · [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=XO2281l_JVw)\n\n  - 圆桌辩论及后续讨论\n    \n    扬·勒丘恩、基利安·温伯格、帕特里斯·西马尔和里奇·卡鲁阿纳。\n\n    [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2hW05ZfsUUo)\n\n- ### [深度强化学习](https:\u002F\u002Fsites.google.com\u002Fview\u002Fdeeprl-symposium-nips2017\u002Fhome)\n    \n  皮特·阿贝尔 · 严端 · 戴维·西尔弗 · 萨廷德·辛格 · 奥俊赫 · 雷因·豪特霍夫\n\n  - 用深度强化学习掌握游戏\n    \n    戴维·西尔弗\n\n    [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=A3ekFcZ3KNw)\n\n  - 深度强化学习及其他领域的可重复性\n    \n    乔埃尔·皮诺\n\n    幻灯片 · 视频\n\n  - 神经地图：深度 RL 的结构化记忆\n    \n    鲁斯兰·萨拉胡丁诺夫\n\n    [幻灯片](http:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002FNIPS2017_StructureMemoryForDeepRL.pdf)\n\n  - 通过随机化价值函数进行深度探索\n    \n    本·范·罗伊\n\n    幻灯片 · 视频\n  \n  - 人工智能孤注一掷\n    \n    迈克尔·鲍林    \n\n- ### [智能的种类：类型、测试与满足社会需求](http:\u002F\u002Fwww.kindsofintelligence.org\u002F)\n    \n  何塞·埃尔南德斯-奥拉略 · 祖宾·加拉马尼 · 托马索·A·波吉奥 · 阿德里安·韦勒 · 马修·克罗斯比\n\n  - 开幕致辞\n    \n    
[幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS-symposium-opening.pdf)\n\n  - 为什么心智会进化：真实景观中的导航进化\n    \n    露西亚·雅各布\n\n    幻灯片 · 视频\n\n  - 幼儿的独特智力：来自认知发展的 AI 洞见\n    \n    艾莉森·戈普尼克\n\n    [幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FGopnik-NIPS.pptx)\n\n  - 从第一性原理学习\n    \n    德米斯·哈萨比斯\n\n    幻灯片 · 视频\n  \n  - 智能的类型：为什么类人 AI 很重要\n    \n    乔什·特南鲍姆   \n\n  - 通往通用人工智能的道路\n    \n    盖瑞·马库斯\n\n    [幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FGopnik-NIPS.pptx)\n\n  - 视频游戏与协作式 AI 的道路\n    \n    卡佳·霍夫曼\n\n    [幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002F2017-12-07-Katja-Hofmann-symposium-kinds-of-intelligence.pdf) · 视频\n  \n  - 公平的问题\n    \n    辛西娅·德沃克\n\n    [幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS2017-Dwork.pdf)\n \n  - 国家、企业、思考机器：人工代理与人工智能\n    \n    大卫·伦西曼\n\n    幻灯片 · 视频\n  \n  - 闭幕致辞\n    \n    [幻灯片](https:\u002F\u002Fintelligence.webs.upv.es\u002Fslides\u002FNIPS-symposium-closing.pdf)\n\n## WiML\n\n- **贝叶斯机器学习：大规模下的不确定性与鲁棒性量化**\n\n  塔玛拉·布罗德里克\n\n  幻灯片 · 视频 · 代码\n\n- **面向区域守护任务的通信中心多智能体深度强化学习**\n\n  艾什瓦里娅·乌尼克里希南\n\n  幻灯片 · 视频 · 代码\n\n- **图卷积网络可在基因组学深度学习模型中编码三维基因组结构**\n\n  佩顿·格林赛德\n\n  幻灯片 · 视频 · 代码\n\n- **社会科学中的机器学习**\n\n  汉娜·沃拉奇\n\n  幻灯片 · 视频 · 代码\n\n- **公平感知推荐系统**\n\n  帕拉克·阿加瓦尔\n\n  幻灯片 · 视频 · 代码\n\n- **奖励通道受损时的强化学习**\n\n  维多利亚·克拉基夫纳\n\n  幻灯片 · 视频 · 代码\n\n- **改善医疗保健：强化学习面临的挑战与机遇**\n\n  乔埃尔·皮诺\n\n  幻灯片 · 视频 · 代码\n\n- **利用深度强化学习中的对抗攻击提升鲁棒性**\n\n  唐振毅\n\n  幻灯片 · 视频 · 代码\n\n- **时间敏感型机器学习**\n\n  妮娜·米什拉\n\n  幻灯片 · 视频 · 代码\n\n- **重复拍卖中叫价机制评估的一般框架**\n\n  霍达·海达里\n\n  幻灯片 · 视频 · 代码\n\n- **调动专家智慧：基于狄利克雷过程的社会科学分歧性先验 elicitation 方法**\n\n  莎拉·布沙\n\n  幻灯片 · 视频 · 代码\n\n- **大型属性图中的表示学习**\n\n  内斯琳·K·艾哈迈德\n\n  [幻灯片](https:\u002F\u002Fwww.slideshare.net\u002FNesreenAhmed2\u002Frepresentation-learning-in-large-attributed-graphs) · 视频 · 代码","# NIPS 2017 资源库快速上手指南\n\n**工具简介**：`nips2017` 并非一个需要安装运行的软件包或代码库，而是一个由社区维护的 **NIPS 2017 会议资源索引仓库**。它汇集了该年度会议的特邀演讲（Invited 
Talks）、教程（Tutorials）和研讨会（Workshops）的幻灯片、视频链接及相关代码库地址。本指南旨在帮助开发者快速定位并获取所需的学习资料。\n\n## 环境准备\n\n由于本项目仅为资源列表，**无需配置特定的运行环境、操作系统或安装任何依赖库**。\n\n您只需要具备以下条件即可开始使用：\n*   **网络设备**：能够访问互联网（部分视频托管于 YouTube，国内用户可能需要网络代理才能观看；幻灯片多为 PDF 或在线网页，通常可直接访问）。\n*   **浏览工具**：现代 Web 浏览器（推荐 Chrome, Firefox, Edge）。\n*   **文档阅读器**：用于查看 `.pdf` 格式的幻灯片（如 Adobe Acrobat, 浏览器内置阅读器）。\n*   **开发环境（可选）**：如果某些条目提供了 \"Code\" 链接指向 GitHub 仓库，您需要在本地安装 `git` 以及对应的深度学习框架（如 TensorFlow, PyTorch 等）来运行示例代码。\n\n## 安装步骤\n\n本项目不需要执行 `pip install` 或 `npm install` 等安装命令。获取资源的“安装”过程即为访问仓库页面或克隆仓库到本地以便离线查阅链接。\n\n### 方式一：在线浏览（推荐）\n直接访问该项目的 GitHub 页面或原始 Markdown 文件，点击其中的链接即可跳转至资源所在地。\n\n### 方式二：克隆到本地（便于检索）\n如果您希望将资源列表保存到本地进行检索或贡献内容，可以使用以下命令：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fhindupuravinash\u002Fnips2017.git\ncd nips2017\n```\n\n*注：原仓库未提供中国镜像源，若克隆速度较慢，可尝试使用国内代码托管平台（如 Gitee）搜索是否有同步镜像，或直接下载 ZIP 包。*\n\n## 基本使用\n\n本仓库的核心用法是**根据分类查找链接**。以下是几种典型的使用场景：\n\n### 1. 查找特邀演讲视频与幻灯片\n在 `README.md` 的 **Invited Talks** 章节中，您可以找到行业大佬的分享资源。例如，想查看 Pieter Abbeel 关于“机器人深度学习”的演讲：\n\n*   **定位条目**：找到 `Deep Learning for Robotics`。\n*   **获取资源**：\n    *   点击 `[Slides]` 链接下载 PDF 幻灯片。\n    *   点击 `[Video]` 链接跳转至 YouTube 观看录像。\n    *   若显示 `Code`，通常意味着演讲中涉及的代码已开源，可顺着链接前往对应代码库。\n\n### 2. 学习特定主题的教程\n在 **Tutorials** 章节中，涵盖了从基础到高阶的主题。例如，想学习“最优传输（Optimal Transport）”：\n\n*   **定位条目**：找到 `A Primer on Optimal Transport`。\n*   **操作**：直接点击下载链接获取教学幻灯片 (`aprimeronOT.pdf`)。\n\n### 3. 深入研讨会议题\n**Workshops** 章节包含了更垂直领域的讨论（如贝叶斯深度学习、联邦学习等）。每个 Workshop 下列出了具体的演讲题目和讲者。\n\n*   **示例**：查找关于“联邦多任务学习”的资料。\n    *   进入 `ML Systems Workshop @ NIPS 2017` 部分。\n    *   找到 `Federated Multi-Task Learning` 条目。\n    *   点击下载链接获取 `mocha-NIPS.pdf` 幻灯片。\n\n### 4. 
参与贡献（可选）\n如果您发现了新的资源链接或发现原有链接失效，可以通过以下方式贡献：\n\n```bash\n# 创建新分支\ngit checkout -b add-new-resource\n\n# 编辑 README.md 添加链接，保存后提交\ngit add README.md\ngit commit -m \"Add new slide link for [Talk Name]\"\n\n# 推送并发起 Pull Request\ngit push origin add-new-resource\n```\n\n**提示**：由于部分视频资源位于 YouTube，国内开发者若无法直接观看，建议在 Bilibili 等国内视频平台搜索对应的演讲者姓名和主题（如 \"NIPS 2017 Pieter Abbeel\"），往往能找到社区搬运的中文字幕版本。","某高校 AI 实验室的研究生团队正筹备关于“机器学习公平性”与“几何深度学习”的综述报告，急需获取 2017 年顶级会议的一手演讲资料。\n\n### 没有 nips2017 时\n- 研究人员需在 YouTube、Google Drive 及个人主页间反复跳转搜索，耗费数小时才能凑齐 Solon Barocas 或 Michael Bronstein 等专家的零散幻灯片。\n- 难以确认视频链接是否失效或缺失代码资源，导致复现教程中的算法时因缺少关键实现细节而受阻。\n- 无法系统性浏览当年所有 Invited Talks 和 Workshops 的全貌，极易遗漏如 Kate Crawford 关于“偏见”的重要观点，造成文献调研不完整。\n- 缺乏统一的索引目录，团队成员间共享资料时需手动整理冗长的 URL 列表，协作效率低下且容易出错。\n\n### 使用 nips2017 后\n- 团队直接在仓库的\"Tutorials\"章节一键获取“机器学习公平性”和“图上的几何深度学习”等专题的幻灯片、高清视频及配套代码链接。\n- 每个条目均经过验证，明确标注了资源状态（如 Slides·Video·Code），确保下载的资料完整可用，加速了算法复现进程。\n- 通过清晰的分类目录（Invited Talks\u002FWorkshops 等），快速定位到 John Platt 或 Pieter Abbeel 等大咖的主题演讲，构建了完整的知识图谱。\n- 利用现成的结构化清单，成员可立即将标准化链接同步至内部文档，大幅减少了资料搜集与整理的时间成本。\n\nnips2017 将分散的顶级会议资源聚合为统一的知识库，让科研人员从繁琐的“找资料”转变为高效的“用资料”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fhindupuravinash_nips2017_51f52dab.jpg","hindupuravinash","Avinash Hindupur","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fhindupuravinash_a15fddca.jpg","Machine Learning | Data | Developer","@idam-ai ","Hyderabad",null,"https:\u002F\u002Fwww.idam.ai","https:\u002F\u002Fgithub.com\u002Fhindupuravinash",887,192,"2025-07-19T15:54:43",1,"","未说明",{"notes":91,"python":89,"dependencies":92},"该仓库并非可执行的 AI 软件工具，而是 NIPS 2017 会议的演讲、教程和研讨会资源（幻灯片、视频链接）汇总列表。因此，它不需要特定的操作系统、GPU、内存、Python 版本或依赖库即可‘运行’。用户只需通过浏览器访问 README 中提供的链接即可查看相关内容。",[],[13],[95,96,97,98],"deep-learning","machine-learning","nips-2017","neural-networks","2026-03-27T02:49:30.150509","2026-04-06T08:42:03.518513",[102,107,112],{"id":103,"question_zh":104,"answer_zh":105,"source_url":106},16689,"如何下载幻灯片为 PPTX 或 PDF 格式？","您可以在该仓库的 README 
文件中找到幻灯片及配套视频的下载链接。或者直接访问 Google Drive 链接下载：https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1SuwiICLERd7SfYo3FiqNG0tCEBUjKcT7\u002Fview。目前仓库中也已提供可下载的 PDF 版本。","https:\u002F\u002Fgithub.com\u002Fhindupuravinash\u002Fnips2017\u002Fissues\u002F7",{"id":108,"question_zh":109,"answer_zh":110,"source_url":111},16690,"几何深度学习（Geometric Deep Learning）的幻灯片链接失效或显示 404 错误怎么办？","如果链接失效，通常是因为网站暂时维护或流量异常导致。请尝试访问 README 中提供的最新链接：http:\u002F\u002Fgeometricdeeplearning.com\u002Fslides\u002FNIPS-GDL.pdf（注意不要使用旧的 IP 地址链接）。如果仍然无法访问，可能是临时问题，稍后重试即可，维护者也会及时更新链接。","https:\u002F\u002Fgithub.com\u002Fhindupuravinash\u002Fnips2017\u002Fissues\u002F6",{"id":113,"question_zh":114,"answer_zh":115,"source_url":116},16691,"为什么我访问的幻灯片链接被重定向到了错误的地址？","这通常是因为原网站正在进行维修或受到异常流量限制，导致临时重定向。正确的链接应指向 http:\u002F\u002Fgeometricdeeplearning.com\u002Fslides\u002FNIPS-GDL.pdf。请检查您是否使用了 README 中提供的最新链接，旧链接可能已不再有效。","https:\u002F\u002Fgithub.com\u002Fhindupuravinash\u002Fnips2017\u002Fissues\u002F13",[]]