[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-lopusz--awesome-interpretable-machine-learning":3,"similar-lopusz--awesome-interpretable-machine-learning":54},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":7,"owner_avatar_url":15,"owner_bio":7,"owner_company":7,"owner_location":7,"owner_email":7,"owner_twitter":7,"owner_website":16,"owner_url":17,"languages":18,"stars":27,"forks":28,"last_commit_at":29,"license":7,"difficulty_score":30,"env_os":31,"env_gpu":32,"env_ram":32,"env_deps":33,"category_tags":36,"github_topics":40,"view_count":48,"oss_zip_url":7,"oss_zip_packed_at":7,"status":49,"created_at":50,"updated_at":51,"faqs":52,"releases":53},9415,"lopusz\u002Fawesome-interpretable-machine-learning","awesome-interpretable-machine-learning",null,"awesome-interpretable-machine-learning 是一份精心整理的开源资源清单，旨在帮助开发者和研究人员深入理解机器学习模型的内部运作机制。在人工智能日益复杂的今天，许多高精度模型如同“黑盒”，难以解释其决策依据，这在医疗、金融等高风险领域尤为棘手。这份清单正是为了解决模型透明度与可解释性难题而生，它系统性地汇集了从基础理论到前沿实践的各类资料。\n\n内容涵盖三大核心板块：首先是“可解释模型”，介绍了决策树、规则集及线性回归等天生具备透明度的算法；其次是“特征重要性”，提供了随机森林、提升树等模型中评估变量影响力的方法与论文，甚至探讨了通用型的模型无关度量技术；最后是“特征选择”，梳理了筛选关键输入变量的各类策略。无论是希望构建可信 AI 系统的算法工程师，还是致力于研究模型公平性与鲁棒性的学者，都能从中找到极具价值的参考文献、代码实现和技术讨论。通过整合经典论文与现代工具，awesome-interpretable-machine-learning 让复杂的模型决策过程变得清晰可见，是通往可信赖人工智能的重要指南。","* Awesome Interpretable Machine Learning [[https:\u002F\u002Fawesome.re][https:\u002F\u002Fawesome.re\u002Fbadge.svg]]\n\nOpinionated list of resources facilitating model interpretability\n(introspection, simplification, visualization, explanation).\n\n** Interpretable Models\n   + Interpretable models\n     + Simple decision trees\n     + Rules\n     + (Regularized) linear regression\n     + k-NN\n\n   + (2008) Predictive learning via rule ensembles by Jerome H. Friedman, Bogdan E. 
Popescu\n     + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F07-AOAS148\n\n   + (2014) Comprehensible classification models by Alex A. Freitas\n     + https:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2594473.2594475\n     + http:\u002F\u002Fwww.kdd.org\u002Fexploration_files\u002FV15-01-01-Freitas.pdf\n     + Interesting discussion of interpretability for a few classification models\n       (decision trees, classification rules, decision tables, nearest neighbors and Bayesian network classifier)\n\n   + (2015) Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model by Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.01644\n     + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F15-AOAS848\n\n   + (2017) Learning Explanatory Rules from Noisy Data by Richard Evans, Edward Grefenstette\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.04574\n\n   + (2019) Transparent Classification with Multilayer Logical Perceptrons and Random Binarization by Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.04695\n     + Code: https:\u002F\u002Fgithub.com\u002F12wang3\u002Fmllp\n\n** Feature Importance\n   + Models offering feature importance measures\n     + Random forest\n     + Boosted trees\n     + Extremely randomized trees\n       + (2006) Extremely randomized trees by Pierre Geurts, Damien Ernst, Louis Wehenkel\n         + https:\u002F\u002Fdx.doi.org\u002F10.1007\u002Fs10994-006-6226-1\n     + Random ferns\n       + (2015) rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning by Miron B. 
Kursa\n         + https:\u002F\u002Fdx.doi.org\u002F10.18637\u002Fjss.v061.i10\n         + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FrFerns\n         + https:\u002F\u002Fnotabug.org\u002Fmbq\u002FrFerns\n     + Linear regression (with a grain of salt)\n\n   + (2007) Bias in random forest variable importance measures: Illustrations, sources and a solution by Carolin Strobl, Anne-Laure Boulesteix, Achim Zeileis, Torsten Hothorn\n     + https:\u002F\u002Fdx.doi.org\u002F10.1186\u002F1471-2105-8-25\n\n   + (2008) Conditional Variable Importance for Random Forests by Carolin Strobl, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, Achim Zeileis\n     + https:\u002F\u002Fdx.doi.org\u002F10.1186\u002F1471-2105-9-307\n\n   + (2018) Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the \"Rashomon\" Perspective by Aaron Fisher, Cynthia Rudin, Francesca Dominici\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1801.01489\n     + https:\u002F\u002Fgithub.com\u002Faaronjfisher\u002Fmcr\n     + Universal (model agnostic) variable importance measure\n\n   + (2019) Please Stop Permuting Features: An Explanation and Alternatives by Giles Hooker, Lucas Mentch\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.03151\n     + Paper advocating against feature permutation for importance\n\n   + (2018) Visualizing the Feature Importance for Black Box Models by Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.06620\n     + https:\u002F\u002Fgithub.com\u002Fgiuseppec\u002FfeatureImportance\n     + Global and local (model agnostic) variable importance measure (based on Model Reliance)\n\n   + Very good blog post describing deficiencies of random forest feature importance and the permutation importance\n     + http:\u002F\u002Fexplained.ai\u002Frf-importance\u002Findex.html\n\n   + Permutation importance - simple model agnostic approach is described in Eli5 
documentation\n     + https:\u002F\u002Feli5.readthedocs.io\u002Fen\u002Flatest\u002Fblackbox\u002Fpermutation_importance.html\n\n** Feature Selection\n   + Classification of feature selection methods\n     + Filters\n     + Wrappers\n     + Embedded methods\n\n   + (2003) An Introduction to Variable and Feature Selection by Isabelle Guyon, André Elisseeff\n     + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume3\u002Fguyon03a\u002Fguyon03a.pdf\n     + Be sure to read this very illustrative introduction to feature selection\n\n   + Filter Methods\n\n     + (2006) On the Use of Variable Complementarity for Feature Selection in Cancer Classification by Patrick Meyer, Gianluca Bontempi\n       + https:\u002F\u002Fdx.doi.org\u002F10.1007\u002F11732242_9\n       + https:\u002F\u002Fpdfs.semanticscholar.org\u002Fd72f\u002Ff5063520ce4542d6d9b9e6a4f12aafab6091.pdf\n       + Introduces information theoretic methods - double input symmetrical relevance (DISR)\n\n     + (2012) Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection by Gavin Brown, Adam Pocock, Ming-Jie Zhao, Mikel Luján\n       + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume13\u002Fbrown12a\u002Fbrown12a.pdf\n       + Code: https:\u002F\u002Fgithub.com\u002FCraigacp\u002FFEAST\n       + Discusses various approaches based on mutual information (MIM, mRMR, MIFS, CMIM, JMI, DISR, ICAP, CIFE, CMI)\n\n     + (2012) Feature selection via joint likelihood by Adam Pocock\n       + http:\u002F\u002Fwww.cs.man.ac.uk\u002F~gbrown\u002Fpublications\u002FpocockPhDthesis.pdf\n\n     + (2017) Relief-Based Feature Selection: Introduction and Review by Ryan J. Urbanowicz, Melissa Meeker, William LaCava, Randal S. Olson, Jason H. Moore\n       + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.08421\n\n     + (2017) Benchmarking Relief-Based Feature Selection Methods for Bioinformatics Data Mining by Ryan J. Urbanowicz, Randal S. 
Olson, Peter Schmitt, Melissa Meeker, Jason H. Moore\n       + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.08477\n\n   + Wrapper methods\n\n     + (2015) Feature Selection with the Boruta Package by Miron B. Kursa, Witold R. Rudnicki\n       + https:\u002F\u002Fdx.doi.org\u002F10.18637\u002Fjss.v036.i11\n       + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FBoruta\u002F\n       + Code (official, R): https:\u002F\u002Fnotabug.org\u002Fmbq\u002FBoruta\u002F\n       + Code (Python): https:\u002F\u002Fgithub.com\u002Fscikit-learn-contrib\u002Fboruta_py\n\n     + Boruta for those in a hurry\n       + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FBoruta\u002Fvignettes\u002Finahurry.pdf\n\n   + General\n\n     + (1994) Irrelevant Features and the Subset Selection Problem by George John, Ron Kohavi, Karl Pfleger\n       + https:\u002F\u002Fpdfs.semanticscholar.org\u002Fa83b\u002Fddb34618cc68f1014ca12eef7f537825d104.pdf\n       + Classic paper discussing weakly relevant features, irrelevant features, strongly relevant features\n\n     + (2003) Special issue of JMLR on feature selection\n       + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fspecial\u002Ffeature03.html\n\n     + (2004) Result Analysis of the NIPS 2003 Feature Selection Challenge by Isabelle Guyon, Steve Gunn, Asa Ben-Hur, Gideon Dror\n       + Paper: https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2728-result-analysis-of-the-nips-2003-feature-selection-challenge.pdf\n       + Website: http:\u002F\u002Fclopinet.com\u002Fisabelle\u002FProjects\u002FNIPS2003\u002F\n\n     + (2007) Consistent Feature Selection for Pattern Recognition in Polynomial Time by Roland Nilsson, José Peña, Johan Björkegren, Jesper Tegnér\n       + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume8\u002Fnilsson07a\u002Fnilsson07a.pdf\n       + Discusses minimal optimal vs all-relevant approaches to feature selection\n\n   + Feature Engineering and Selection by Kuhn & Johnson\n  
   + Slightly off-topic, but very interesting book\n     + http:\u002F\u002Fwww.feat.engineering\u002Findex.html\n     + https:\u002F\u002Fbookdown.org\u002Fmax\u002FFES\u002F\n     + https:\u002F\u002Fgithub.com\u002Ftopepo\u002FFES\n\n   + Feature Engineering presentation by H. J. van Veen\n     + Slightly off-topic, but very interesting deck of slides\n     + Slides: https:\u002F\u002Fwww.slideshare.net\u002FHJvanVeen\u002Ffeature-engineering-72376750\n\n** Model Explanations\n*** Philosophy\n    + Magnets by R. P. Feynman\n      https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=wMFPe-DwULM\n\n    + (2002) Looking Inside the Black Box, presentation by Leo Breiman\n      + https:\u002F\u002Fwww.stat.berkeley.edu\u002Fusers\u002Fbreiman\u002Fwald2002-2.pdf\n\n    + (2011) To Explain or to Predict? by Galit Shmueli\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1101.0891\n      + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F10-STS330\n\n    + (2016) The Mythos of Model Interpretability by Zachary C. 
Lipton\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.03490\n      + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mvzBQci04qA\n\n    + (2017) Towards A Rigorous Science of Interpretable Machine Learning by Finale Doshi-Velez, Been Kim\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.08608\n\n    + (2017) The Promise and Peril of Human Evaluation for Model Interpretability by Bernease Herman\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.07414\n\n    + (2018) [[http:\u002F\u002Fbayes.cs.ucla.edu\u002FWHY\u002Fwhy-intro.pdf][The Book of Why: The New Science of Cause and Effect]] by Judea Pearl\n\n    + (2018) Please Stop Doing the \"Explainable\" ML by Cynthia Rudin\n      + Video (starts 17:30, lasts 10 min): https:\u002F\u002Fzoom.us\u002Frecording\u002Fplay\u002F0y-iI9HamgyDzzP2k_jiTu6jB7JgVVXnjWZKDMbnyRTn3FsxTDZy6Wkrj3_ekx4J\n      + Linked at: https:\u002F\u002Fusers.cs.duke.edu\u002F~cynthia\u002Fmediatalks.html\n\n    + (2018) Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning by Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.00069\n\n    + (2019) Interpretable machine learning: definitions, methods, and applications by W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.04592\n\n    + (2019) On Explainable Machine Learning Misconceptions A More Human-Centered Machine Learning by Patrick Hall\n      + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fxai_misconceptions\u002Fblob\u002Fmaster\u002Fxai_misconceptions.pdf\n      + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fxai_misconceptions\n\n    + (2019) An Introduction to Machine Learning Interpretability. 
An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI by Patrick Hall and Navdeep Gill\n      + https:\u002F\u002Fwww.h2o.ai\u002Fwp-content\u002Fuploads\u002F2019\u002F08\u002FAn-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf\n\n*** Model Agnostic Explanations\n    + (2009) How to Explain Individual Classification Decisions by David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F0912.1128\n\n    + (2013) Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation by Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1309.6392\n\n    + (2016) \"Why Should I Trust You?\": Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.04938\n      + Code: https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime\n      + https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime-experiments\n      + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=bCgEP2zuYxI\n      + Introduces the LIME method (Local Interpretable Model-agnostic Explanations)\n\n    + (2016) A Model Explanation System: Latest Updates and Extensions by Ryan Turner\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.09517\n      + http:\u002F\u002Fwww.blackboxworkshop.org\u002Fpdf\u002FTurner2015_MES.pdf\n\n    + (2017) Understanding Black-box Predictions via Influence Functions by Pang Wei Koh, Percy Liang\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.04730\n\n    + (2017) A Unified Approach to Interpreting Model Predictions by Scott Lundberg, Su-In Lee\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.07874\n      + Code: https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\n      + Introduces the SHAP method (SHapley Additive exPlanations), 
generalizing LIME\n\n    + (2018) Anchors: High-Precision Model-Agnostic Explanations by Marco Ribeiro, Sameer Singh, Carlos Guestrin\n      + https:\u002F\u002Fhomes.cs.washington.edu\u002F~marcotcr\u002Faaai18.pdf\n      + Code: https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Fanchor-experiments\n\n    + (2018) Learning to Explain: An Information-Theoretic Perspective on Model Interpretation by Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07814\n\n    + (2018) Explanations of model predictions with live and breakDown packages by Mateusz Staniak, Przemyslaw Biecek\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.01955\n      + Docs: https:\u002F\u002Fmi2datalab.github.io\u002Flive\u002F\n      + Code: https:\u002F\u002Fgithub.com\u002FMI2DataLab\u002Flive\n      + Docs: https:\u002F\u002Fpbiecek.github.io\u002FbreakDown\n      + Code: https:\u002F\u002Fgithub.com\u002Fpbiecek\u002FbreakDown\n\n    + (2018) A review book: Interpretable Machine Learning. A Guide for Making Black Box Models Explainable by Christoph Molnar\n      + https:\u002F\u002Fchristophm.github.io\u002Finterpretable-ml-book\u002F\n\n    + (2018) Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead by Cynthia Rudin\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10154\n\n    + (2019) Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition by Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03867\n\n*** Model Specific Explanations - Neural Networks\n    + (2013) Visualizing and Understanding Convolutional Networks by Matthew D. Zeiler, Rob Fergus\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1311.2901\n\n    + (2013) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps by Karen Simonyan, Andrea Vedaldi, Andrew Zisserman\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1312.6034\n\n    + (2015) Understanding Neural Networks Through Deep Visualization by Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.06579\n      + https:\u002F\u002Fgithub.com\u002Fyosinski\u002Fdeep-visualization-toolbox\n\n    + (2016) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization by Ramprasaath R. 
Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1610.02391\n\n    + (2016) Generating Visual Explanations by Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.08507\n\n    + (2016) Rationalizing Neural Predictions by Tao Lei, Regina Barzilay, Tommi Jaakkola\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.04155\n      + https:\u002F\u002Fpeople.csail.mit.edu\u002Ftaolei\u002Fpapers\u002Femnlp16_rationale_slides.pdf\n      + Code: https:\u002F\u002Fgithub.com\u002Ftaolei87\u002Frcnn\u002Ftree\u002Fmaster\u002Fcode\u002Frationale\n\n    + (2016) Gradients of Counterfactuals by Mukund Sundararajan, Ankur Taly, Qiqi Yan\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.02639\n\n    + Pixel entropy can be used to detect relevant picture regions (for ConvNets)\n      + See Visualization section and Fig. 5 of the paper\n        + (2017) High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks by Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. 
Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, Kyunghyun Cho\n          + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.07047\n\n    + (2017) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability by Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.05806\n      + https:\u002F\u002Fresearch.googleblog.com\u002F2017\u002F11\u002Finterpreting-deep-neural-networks-with.html\n\n    + (2017) Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks by Jose Oramas, Kaili Wang, Tinne Tuytelaars\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.06302\n\n    + (2017) Axiomatic Attribution for Deep Networks by Mukund Sundararajan, Ankur Taly, Qiqi Yan\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.01365\n      + Code: https:\u002F\u002Fgithub.com\u002Fankurtaly\u002FIntegrated-Gradients\n      + Proposes the Integrated Gradients method\n      + See also: Gradients of Counterfactuals https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.02639.pdf\n\n    + (2017) Learning Important Features Through Propagating Activation Differences by Avanti Shrikumar, Peyton Greenside, Anshul Kundaje\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.02685\n      + Proposes the DeepLIFT method\n      + Code: https:\u002F\u002Fgithub.com\u002Fkundajelab\u002Fdeeplift\n      + Videos: https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML\n\n    + (2017) The (Un)reliability of saliency methods by Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.00867\n      + Review of failure modes of methods extracting the most important pixels for prediction\n\n    + (2018) Classifier-agnostic saliency map extraction by Konrad Zolna, Krzysztof J. 
Geras, Kyunghyun Cho\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08249\n      + Code: https:\u002F\u002Fgithub.com\u002Fkondiz\u002Fcasme\n\n    + (2018) A Benchmark for Interpretability Methods in Deep Neural Networks by Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.10758\n\n    + (2018) The Building Blocks of Interpretability by Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, Alexander Mordvintsev\n      + https:\u002F\u002Fdx.doi.org\u002F10.23915\u002Fdistill.00010\n      + Has some embedded links to notebooks\n      + Uses the Lucid library https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Flucid\n\n    + (2018) Hierarchical interpretations for neural network predictions by Chandan Singh, W. James Murdoch, Bin Yu\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.05337\n      + Code: https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical_dnn_interpretations\n\n    + (2018) iNNvestigate neural networks! by Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.04260\n      + Code: https:\u002F\u002Fgithub.com\u002Falbermax\u002Finnvestigate\n\n    + (2018) YASENN: Explaining Neural Networks via Partitioning Activation Sequences by Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.02783\n\n    + (2019) Attention is not Explanation by Sarthak Jain, Byron C. 
Wallace\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.10186\n\n    + (2019) Attention Interpretability Across NLP Tasks by Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.11218\n\n    + (2019) GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction by Thai Le, Suhang Wang, Dongwon Lee\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.02042\n      + Code: https:\u002F\u002Fgithub.com\u002Flethaiq\u002FGRACE_KDD20\n\n** Extracting Interpretable Models From Complex Ones\n\n   + (2017) Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples by Gail Weiss, Yoav Goldberg, Eran Yahav\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09576\n\n   + (2017) Distilling a Neural Network Into a Soft Decision Tree by Nicholas Frosst, Geoffrey Hinton\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09784\n\n   + (2017) Detecting Bias in Black-Box Models Using Transparent Model Distillation by Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou\n     + http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Fcontents\u002Fpapers\u002Fmain\u002FAIES_2018_paper_96.pdf\n\n** Model Visualization\n   + Visualizing Statistical Models: Removing the blindfold\n     + http:\u002F\u002Fhad.co.nz\u002Fstat645\u002Fmodel-vis.pdf\n\n   + Partial dependence plots\n     + http:\u002F\u002Fscikit-learn.org\u002Fstable\u002Fauto_examples\u002Fensemble\u002Fplot_partial_dependence.html\n     + pdp: An R Package for Constructing Partial Dependence Plots\n       https:\u002F\u002Fjournal.r-project.org\u002Farchive\u002F2017\u002FRJ-2017-016\u002FRJ-2017-016.pdf\n       https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fpdp\u002Findex.html\n\n   + ggfortify: Unified Interface to Visualize Statistical Results of Popular R Packages\n     + 
https:\u002F\u002Fjournal.r-project.org\u002Farchive\u002F2016-2\u002Ftang-horikoshi-li.pdf\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fggfortify\u002Findex.html\n\n   + RandomForestExplainer\n     + Master thesis https:\u002F\u002Frawgit.com\u002FgeneticsMiNIng\u002FBlackBoxOpener\u002Fmaster\u002FrandomForestExplainer_Master_thesis.pdf\n     + R code\n       + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FrandomForestExplainer\u002Findex.html\n       + Code: https:\u002F\u002Fgithub.com\u002FMI2DataLab\u002FrandomForestExplainer\n\n   + ggRandomForest\n     + Paper (vignette) https:\u002F\u002Fgithub.com\u002Fehrlinger\u002FggRandomForests\u002Fraw\u002Fmaster\u002Fvignettes\u002FrandomForestSRC-Survival.pdf\n     + R code\n       + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FggRandomForests\u002Findex.html\n       + Code: https:\u002F\u002Fgithub.com\u002Fehrlinger\u002FggRandomForests\n\n** Selected Review Talks and Tutorials\n   + Tutorial on Interpretable machine learning at ICML 2017\n     + Slides: http:\u002F\u002Fpeople.csail.mit.edu\u002Fbeenkim\u002Fpapers\u002FBeenK_FinaleDV_ICML2017_tutorial.pdf\n\n   + P. Biecek, Show Me Your Model - Tools for Visualisation of Statistical Models\n     + Video: https:\u002F\u002Fchannel9.msdn.com\u002FEvents\u002FuseR-international-R-User-conferences\u002FuseR-International-R-User-2017-Conference\u002FShow-Me-Your-Model-tools-for-visualisation-of-statistical-models\n\n   + S. Ritchie, Just-So Stories of AI\n     + Video: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=DiWkKqZChF0\n     + Slides: https:\u002F\u002Fspeakerdeck.com\u002Fsritchie\u002Fjust-so-stories-for-ai-explaining-black-box-predictions\n\n   + C. 
Jarmul, Towards Interpretable Accountable Models\n     + Video: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=B3PtcF-6Dtc\n     + Slides: https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002Fe\u002F2PACX-1vR05kpagAbL5qo1QThxwu44TI5SQAws_UFVg3nUAmKp39uNG0xdBjcMA-VyEeqZRGGQtt0CS5h2DMTS\u002Fembed?start=false&loop=false&delayms=3000\n\n   + I. Ozsvald, Machine Learning Libraries You'd Wish You'd Known About\n     + A large part of the talk covers model explanation and visualization\n     + Video: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=nDF7_8FOhpI\n     + Associated notebook on explaining regression predictions: https:\u002F\u002Fgithub.com\u002Fianozsvald\u002Fdata_science_delivered\u002Fblob\u002Fmaster\u002Fml_explain_regression_prediction.ipynb\n\n   + G. Varoquaux, Understanding and diagnosing your machine-learning models (covers PDP and Lime among others)\n     + Video: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kbj3llSbaVA\n     + Slides: http:\u002F\u002Fgael-varoquaux.info\u002Finterpreting_ml_tuto\u002F\n\n** Venues\n   + Interpretable ML Symposium (NIPS 2017) (contains links to *papers*, *slides* and *videos*)\n     + http:\u002F\u002Finterpretable.ml\u002F\n     + Debate: Interpretability is necessary in machine learning\n       + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2hW05ZfsUUo\n   + Workshop on Human Interpretability in Machine Learning (WHI), organised in conjunction with ICML\n     + 2018 (contains links to *papers* and *slides*)\n       + https:\u002F\u002Fsites.google.com\u002Fview\u002Fwhi2018\n       + Proceedings https:\u002F\u002Farxiv.org\u002Fhtml\u002F1807.01308\n     + 2017 (contains links to *papers* and *slides*)\n       + https:\u002F\u002Fsites.google.com\u002Fview\u002Fwhi2017\u002Fhome\n       + Proceedings https:\u002F\u002Farxiv.org\u002Fhtml\u002F1708.02666\n     + 2016 (contains links to *papers*)\n       + https:\u002F\u002Fsites.google.com\u002Fsite\u002F2016whi\u002F\n       + Proceedings 
https:\u002F\u002Farxiv.org\u002Fhtml\u002F1607.02531 or [[https:\u002F\u002Fdrive.google.com\u002Fopen?id=0B9mGJ4F63iKGZWk0cXZraTNjRVU][here]]\n   + Analyzing and interpreting neural networks for NLP (BlackboxNLP), organised in conjunction with EMNLP\n     + 2019 (links below may get prefixed by 2019 later on)\n       + https:\u002F\u002Fblackboxnlp.github.io\u002F\n       + https:\u002F\u002Fblackboxnlp.github.io\u002Fprogram.html\n       + Papers should be available on arXiv\n     + 2018\n       + https:\u002F\u002Fblackboxnlp.github.io\u002F2018\n       + https:\u002F\u002Fblackboxnlp.github.io\u002Fprogram.html\n       + [[https:\u002F\u002Farxiv.org\u002Fsearch\u002Fadvanced?advanced=&terms-0-operator=AND&terms-0-term=BlackboxNLP&terms-0-field=comments&terms-1-operator=OR&terms-1-term=Analyzing+interpreting+neural+networks+NLP&terms-1-field=comments&classification-physics_archives=all&date-filter_by=all_dates&date-year=&date-from_date=&date-to_date=&date-date_type=submitted_date&abstracts=show&size=200&order=-announced_date_first][List of papers]]\n   + FAT\u002FML Fairness, Accountability, and Transparency in Machine Learning [[https:\u002F\u002Fwww.fatml.org\u002F]]\n     + 2018\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2018\n     + 2017\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2017\n     + 2016\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2016\n     + 2015\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2015\n     + 2014\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2014\n   + AAAI\u002FACM Annual Conference on AI, Ethics, and Society\n     + 2019 (links below may get prefixed by 2019 later on)\n       + http:\u002F\u002Fwww.aies-conference.com\u002Faccepted-papers\u002F\n     + 2018\n       + http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Faccepted-papers\u002F\n       + 
http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Faccepted-student-papers\u002F\n** Software\n   Software related to papers is mentioned along with each publication.\n   Here only standalone software is included.\n\n   + DALEX - R package, Descriptive mAchine Learning EXplanations\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FDALEX\u002FDALEX.pdf\n     + Code: https:\u002F\u002Fgithub.com\u002Fpbiecek\u002FDALEX\n\n   + ELI5 - Python package dedicated to debugging machine learning classifiers\n     and explaining their predictions\n     + Code: https:\u002F\u002Fgithub.com\u002FTeamHG-Memex\u002Feli5\n     + https:\u002F\u002Feli5.readthedocs.io\u002Fen\u002Flatest\u002F\n\n   + forestmodel - R package visualizing coefficients of different models with the so-called forest plot\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fforestmodel\u002Findex.html\n     + Code: https:\u002F\u002Fgithub.com\u002FNikNakk\u002Fforestmodel\n\n   + fscaret - R package with automated Feature Selection from 'caret'\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Ffscaret\u002F\n     + Tutorial: https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Ffscaret\u002Fvignettes\u002Ffscaret.pdf\n\n   + iml - R package for Interpretable Machine Learning\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fiml\u002F\n     + Code: https:\u002F\u002Fgithub.com\u002FchristophM\u002Fiml\n     + Publication: http:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F10.21105\u002Fjoss.00786\n\n   + interpret - Python package for training interpretable models and explaining blackbox systems by Microsoft\n     + Code: https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Finterpret\n\n   + lime - R package implementing LIME\n     + https:\u002F\u002Fgithub.com\u002Fthomasp85\u002Flime\n\n   + lofo-importance - Python package computing feature importance with the Leave One Feature Out (LOFO) 
method\n     + Code: https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\n\n   + Lucid - a collection of infrastructure and tools for research in neural network interpretability\n     + Code: https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Flucid\n\n   + praznik - R package with a collection of feature selection filters performing greedy optimisation of mutual information-based usefulness criteria, see JMLR 13, 27−66 (2012)\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fpraznik\u002Findex.html\n     + Code: https:\u002F\u002Fnotabug.org\u002Fmbq\u002Fpraznik\n\n   + yellowbrick - Python package offering visual analysis and diagnostic tools to facilitate machine learning model selection\n     + Code: https:\u002F\u002Fgithub.com\u002FDistrictDataLabs\u002Fyellowbrick\n     + http:\u002F\u002Fwww.scikit-yb.org\u002Fen\u002Flatest\u002F\n\n** Other Resources\n   + *Awesome* list of resources by Patrick Hall\n     + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fawesome-machine-learning-interpretability\n   + *Awesome* XAI resources by Przemysław Biecek\n     + https:\u002F\u002Fgithub.com\u002Fpbiecek\u002Fxai_resources\n","* 令人惊叹的可解释机器学习 [[https:\u002F\u002Fawesome.re][https:\u002F\u002Fawesome.re\u002Fbadge.svg]]\n\n一份带有观点的资源列表，旨在促进模型的可解释性\n（包括内省、简化、可视化和解释）。\n\n** 可解释模型\n   + 可解释模型\n     + 简单决策树\n     + 规则\n     + （正则化）线性回归\n     + k-NN\n\n   + (2008) 杰罗姆·H·弗里德曼、博格丹·E·波佩斯库的《基于规则集成的预测学习》\n     + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F07-AOAS148\n\n   + (2014) 亚历克斯·A·弗雷塔斯的《可理解的分类模型》\n     + https:\u002F\u002Fdx.doi.org\u002F10.1145\u002F2594473.2594475\n     + http:\u002F\u002Fwww.kdd.org\u002Fexploration_files\u002FV15-01-01-Freitas.pdf\n     + 对几种分类模型的可解释性进行了有趣的讨论\n       （决策树、分类规则、决策表、最近邻以及贝叶斯网络分类器）\n\n   + (2015) 使用规则和贝叶斯分析的可解释分类器：构建更好的中风预测模型，作者为本杰明·莱瑟姆、辛西娅·鲁丁、泰勒·H·麦考密克、大卫·马迪根\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1511.01644\n     + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F15-AOAS848\n\n   + 
(2017) 从噪声数据中学习解释性规则，作者为理查德·埃文斯、爱德华·格雷芬斯特特\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.04574\n\n   + (2019) 基于多层逻辑感知机和随机二值化的透明分类，作者为卓王、魏张、宁刘、建勇王\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1912.04695\n     + 代码：https:\u002F\u002Fgithub.com\u002F12wang3\u002Fmllp\n\n** 特征重要性\n   + 提供特征重要性度量的模型\n     + 随机森林\n     + 提升树\n     + 极端随机树\n       + (2006) 皮埃尔·热尔茨、达米安·恩斯特、路易·韦亨凯尔的《极端随机树》\n         + https:\u002F\u002Fdx.doi.org\u002F10.1007\u002Fs10994-006-6226-1\n     + 随机蕨类\n       + (2015) 米隆·B·库尔萨的《rFerns：用于通用机器学习的随机蕨类方法实现》\n         + https:\u002F\u002Fdx.doi.org\u002F10.18637\u002Fjss.v061.i10\n         + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FrFerns\n         + https:\u002F\u002Fnotabug.org\u002Fmbq\u002FrFerns\n     + 线性回归（需谨慎对待）\n\n   + (2007) 卡罗琳·施特罗布勒、安妮-洛尔·布勒斯泰克斯、阿希姆·蔡莱斯、托斯滕·霍索恩的《随机森林变量重要性度量中的偏差：说明、来源及解决方案》\n     + https:\u002F\u002Fdx.doi.org\u002F10.1186\u002F1471-2105-8-25\n\n   + (2008) 卡罗琳·施特罗布勒、安妮-洛尔·布勒斯泰克斯、托马斯·克奈布、托马斯·奥古斯丁、阿希姆·蔡莱斯的《随机森林的条件变量重要性》\n     + https:\u002F\u002Fdx.doi.org\u002F10.1186\u002F1471-2105-9-307\n\n   + (2018) 亚伦·费舍尔、辛西娅·鲁丁、弗朗切斯卡·多米尼奇的《模型类别依赖性：来自“拉什莫恩”视角的任何机器学习模型类别的变量重要性度量》\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1801.01489\n     + https:\u002F\u002Fgithub.com\u002Faaronjfisher\u002Fmcr\n     + 通用的（与模型无关的）变量重要性度量\n\n   + (2019) 吉尔斯·胡克、卢卡斯·门奇的《请停止打乱特征：解释与替代方案》\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1905.03151\n     + 一篇倡导反对通过打乱特征来评估重要性的论文\n\n   + (2018) 朱塞佩·卡萨利基奥、克里斯托夫·莫尔纳尔、伯恩德·比施尔的《黑盒模型的特征重要性可视化》\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.06620\n     + https:\u002F\u002Fgithub.com\u002Fgiuseppec\u002FfeatureImportance\n     + 全局和局部的（与模型无关的）变量重要性度量（基于模型依赖性）\n\n   + 一篇非常好的博客文章，描述了随机森林特征重要性的不足以及置换重要性\n     + http:\u002F\u002Fexplained.ai\u002Frf-importance\u002Findex.html\n\n   + 在Eli5文档中介绍了置换重要性——一种简单的与模型无关的方法\n     + https:\u002F\u002Feli5.readthedocs.io\u002Fen\u002Flatest\u002Fblackbox\u002Fpermutation_importance.html\n\n** 特征选择\n   + 特征选择方法的分类\n     + 过滤法\n   
  + 包装法\n     + 嵌入式方法\n\n   + (2003) 伊莎贝尔·居永、安德烈·埃利塞夫的《变量与特征选择导论》\n     + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume3\u002Fguyon03a\u002Fguyon03a.pdf\n     + 务必阅读这篇非常具有说明性的特征选择入门文章\n\n   + 过滤法\n\n     + (2006) 帕特里克·迈耶、詹卢卡·邦坦皮的《在癌症分类中利用变量互补性进行特征选择》\n       + https:\u002F\u002Fdx.doi.org\u002F10.1007\u002F11732242_9\n       + https:\u002F\u002Fpdfs.semanticscholar.org\u002Fd72f\u002Ff5063520ce4542d6d9b9e6a4f12aafab6091.pdf\n       + 引入了信息论方法——双输入对称相关性（DISR）\n\n     + (2012) 加文·布朗、亚当·波科克、赵明杰、米克尔·卢汉的《条件似然最大化：信息论特征选择的统一框架》\n       + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume13\u002Fbrown12a\u002Fbrown12a.pdf\n       + 代码：https:\u002F\u002Fgithub.com\u002FCraigacp\u002FFEAST\n       + 讨论了基于互信息的各种方法（MIM、mRMR、MIFS、CMIM、JMI、DISR、ICAP、CIFE、CMI）\n\n     + (2012) 亚当·波科克的《基于联合似然的特征选择》\n       + http:\u002F\u002Fwww.cs.man.ac.uk\u002F~gbrown\u002Fpublications\u002FpocockPhDthesis.pdf\n\n     + (2017) 瑞安·J·厄巴诺维奇、梅丽莎·米克尔、威廉·拉卡瓦、兰德尔·S·奥尔森、杰森·H·摩尔的《基于Relief的特征选择：介绍与综述》\n       + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.08421\n\n     + (2017) 瑞安·J·厄巴诺维奇、兰德尔·S·奥尔森、彼得·施密特、梅丽莎·米克尔、杰森·H·摩尔的《生物信息学数据挖掘中基于Relief的特征选择方法的基准测试》\n       + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.08477\n\n   + 包装法\n\n     + (2015) 米隆·B·库尔萨、维托尔德·R·鲁德尼基的《使用BorutaPackage进行特征选择》\n       + https:\u002F\u002Fdx.doi.org\u002F10.18637\u002Fjss.v036.i11\n       + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FBoruta\u002F\n       + 官方R语言代码：https:\u002F\u002Fnotabug.org\u002Fmbq\u002FBoruta\u002F\n       + Python代码：https:\u002F\u002Fgithub.com\u002Fscikit-learn-contrib\u002Fboruta_py\n\n     + Boruta适合赶时间的人\n       + https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FBoruta\u002Fvignettes\u002Finahurry.pdf\n\n   + 一般\n\n     + (1994) 乔治·约翰、罗恩·科哈维、卡尔·普弗勒格的《无关特征与子集选择问题》\n       + https:\u002F\u002Fpdfs.semanticscholar.org\u002Fa83b\u002Fddb34618cc68f1014ca12eef7f537825d104.pdf\n       + 一篇经典的论文，讨论了弱相关特征、无关特征和强相关特征\n\n     + (2003) JMLR关于特征选择的专刊——比较旧（2003年）\n  
   + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fspecial\u002Ffeature03.html\n\n     + (2004) 伊莎贝尔·居永、史蒂夫·冈恩、阿萨·本-赫尔、吉迪昂·德罗尔的《NIPS 2003特征选择挑战赛结果分析》\n       + 论文：https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F2728-result-analysis-of-the-nips-2003-feature-selection-challenge.pdf\n       + 网站：http:\u002F\u002Fclopinet.com\u002Fisabelle\u002FProjects\u002FNIPS2003\u002F\n\n     + (2007) 多项式时间内用于模式识别的一致性特征选择，作者：罗兰·尼尔松、何塞·佩尼亚、约翰·比约克格伦、耶斯珀·特格纳\n       + http:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume8\u002Fnilsson07a\u002Fnilsson07a.pdf\n       + 讨论了特征选择中的最小最优方法与全相关方法\n\n   + 特征工程与选择，作者：库恩和约翰逊\n     + 略微偏离主题，但是一本非常有趣的书\n     + http:\u002F\u002Fwww.feat.engineering\u002Findex.html\n     + https:\u002F\u002Fbookdown.org\u002Fmax\u002FFES\u002F\n     + https:\u002F\u002Fgithub.com\u002Ftopepo\u002FFES\n\n   + H. J. van Veen 的特征工程演示文稿\n     + 略微偏离主题，但是一套非常有趣的幻灯片\n     + 幻灯片：https:\u002F\u002Fwww.slideshare.net\u002FHJvanVeen\u002Ffeature-engineering-72376750\n\n** 模型解释\n*** 哲学\n    + 《磁铁》——理查德·费曼著\n      https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=wMFPe-DwULM\n\n    + (2002) 走进黑箱：利奥·布雷伊曼的演讲\n      + https:\u002F\u002Fwww.stat.berkeley.edu\u002Fusers\u002Fbreiman\u002Fwald2002-2.pdf\n\n    + (2011) 解释还是预测？——加利特·舒梅利著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1101.0891\n      + https:\u002F\u002Fdx.doi.org\u002F10.1214\u002F10-STS330\n\n    + (2016) 模型可解释性的神话——扎卡里·C·利普顿著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.03490\n      + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=mvzBQci04qA\n\n    + (2017) 朝着严谨的可解释机器学习科学迈进——菲娜莱·多希-维莱兹、彬·金著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1702.08608\n\n    + (2017) 人类评估在模型可解释性中的承诺与风险——伯妮丝·赫尔曼著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.07414\n\n    + (2018) 《为什么：因果关系的新科学》——朱迪亚·珀尔著\n      + http:\u002F\u002Fbayes.cs.ucla.edu\u002FWHY\u002Fwhy-intro.pdf\n\n    + (2018) 请停止做“可解释”的机器学习——辛西娅·鲁丁著\n      + 
视频（从17分30秒开始，时长10分钟）：https:\u002F\u002Fzoom.us\u002Frecording\u002Fplay\u002F0y-iI9HamgyDzzP2k_jiTu6jB7JgVVXnjWZKDMbnyRTn3FsxTDZy6Wkrj3_ekx4J\n      + 链接：https:\u002F\u002Fusers.cs.duke.edu\u002F~cynthia\u002Fmediatalks.html\n\n    + (2018) 解释解释：一种评估机器学习可解释性的方法——莱拉尼·H·吉尔平、大卫·鲍、本·Z·袁、阿耶莎·巴杰瓦、迈克尔·斯佩克特、拉拉娜·卡加尔著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.00069\n\n    + (2019) 可解释的机器学习：定义、方法与应用——W·詹姆斯·默多克、钱丹·辛格、卡尔·昆比尔、雷扎·阿巴西-阿斯勒、宾·俞著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1901.04592\n\n    + (2019) 关于可解释机器学习的误解：更以人为本的机器学习——帕特里克·霍尔著\n      + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fxai_misconceptions\u002Fblob\u002Fmaster\u002Fxai_misconceptions.pdf\n      + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fxai_misconceptions\n\n    + (2019) 机器学习可解释性导论：公平、问责、透明与可解释AI的应用视角——帕特里克·霍尔和纳夫迪普·吉尔著\n      + https:\u002F\u002Fwww.h2o.ai\u002Fwp-content\u002Fuploads\u002F2019\u002F08\u002FAn-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf\n\n*** 模型无关的解释\n    + (2009) 如何解释个体分类决策——大卫·贝伦斯、蒂蒙·施罗特、斯特凡·哈梅林、川边元明、卡佳·汉森、克劳斯-罗伯特·穆勒著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F0912.1128\n\n    + (2013) 探索黑箱：用个体条件期望图可视化统计学习——亚历克斯·戈德斯坦、亚当·卡佩尔纳、贾斯汀·布莱奇、埃米尔·皮特金著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1309.6392\n\n    + (2016) “我为什么要信任你？”：解释任何分类器的预测——马尔科·图利奥·里贝罗、萨米尔·辛格、卡洛斯·盖斯特林著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1602.04938\n      + 代码：https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime\n      + https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Flime-experiments\n      + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=bCgEP2zuYxI\n      + 提出了LIME方法（局部可解释的模型无关解释）\n\n    + (2016) 模型解释系统：最新更新与扩展——瑞安·特纳著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.09517\n      + http:\u002F\u002Fwww.blackboxworkshop.org\u002Fpdf\u002FTurner2015_MES.pdf\n\n    + (2017) 通过影响函数理解黑箱预测——庞伟·科和珀西·梁著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.04730\n\n    + (2017) 一种统一的模型预测解释方法——斯科特·伦德伯格、苏-因·李著\n      + 
https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.07874\n      + 代码：https:\u002F\u002Fgithub.com\u002Fslundberg\u002Fshap\n      + 提出了SHAP方法（Shapley Additive Explanations），是对LIME方法的推广\n\n    + (2018) 锚定：高精度的模型无关解释——马尔科·里贝罗、萨米尔·辛格、卡洛斯·盖斯特林著\n      + https:\u002F\u002Fhomes.cs.washington.edu\u002F~marcotcr\u002Faaai18.pdf\n      + 代码：https:\u002F\u002Fgithub.com\u002Fmarcotcr\u002Fanchor-experiments\n\n    + (2018) 学习解释：基于信息论的模型解释视角——简博·陈、乐松、马丁·J·韦恩赖特、迈克尔·I·乔丹著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1802.07814\n\n    + (2018) 使用live和breakDown软件包解释模型预测——马特乌什·斯塔尼亚克、普热米斯瓦夫·别切克著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1804.01955\n      + 文档：https:\u002F\u002Fmi2datalab.github.io\u002Flive\u002F\n      + 代码：https:\u002F\u002Fgithub.com\u002FMI2DataLab\u002Flive\n      + 文档：https:\u002F\u002Fpbiecek.github.io\u002FbreakDown\n      + 代码：https:\u002F\u002Fgithub.com\u002Fpbiecek\u002FbreakDown\n\n    + (2018) 一本综述书籍——《可解释的机器学习：让黑箱模型变得可解释的指南》，作者：克里斯托夫·莫尔纳尔\n      + https:\u002F\u002Fchristophm.github.io\u002Finterpretable-ml-book\u002F\n    + (2018) 停止为高风险决策解释黑箱机器学习模型，转而使用可解释模型——辛西娅·鲁丁著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.10154\n    + (2019) 通过功能分解量化任意机器学习模型的可解释性——克里斯托夫·莫尔纳尔、朱塞佩·卡萨利奇奥、伯恩德·比施尔著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1904.03867\n\n*** 模型特定的解释——神经网络\n    + (2013) 可视化与理解卷积网络——马修·D·齐勒、罗布·弗格斯著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1311.2901\n\n    + (2013) 深入卷积网络：图像分类模型与显著性图的可视化——卡伦·西蒙扬、安德烈亚·韦达尔迪、安德鲁·齐瑟曼著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1312.6034\n\n    + (2015) 通过深度可视化理解神经网络——杰森·约辛斯基、杰夫·克鲁恩、安·阮、托马斯·福克斯、霍德·利普森著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1506.06579\n      + https:\u002F\u002Fgithub.com\u002Fyosinski\u002Fdeep-visualization-toolbox\n\n    + (2016) Grad-CAM：基于梯度定位的深度网络可视化解释——拉姆普拉萨特·R·塞尔瓦拉朱、迈克尔·科格斯威尔、阿比舍克·达斯、拉马克里希纳·韦丹塔姆、黛薇·帕里克、德鲁夫·巴特拉著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1610.02391\n\n    + (2016) 
生成视觉解释——丽莎·安·亨德里克斯、泽内普·阿卡塔、马库斯·罗尔巴赫、杰夫·多纳休、伯恩特·席勒、特雷弗·达雷尔著\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1603.08507\n\n    + (2016) 通过理性化神经网络预测——陶磊、雷吉娜·巴尔齐莱、汤米·雅科拉\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1606.04155\n      + https:\u002F\u002Fpeople.csail.mit.edu\u002Ftaolei\u002Fpapers\u002Femnlp16_rationale_slides.pdf\n      + 代码：https:\u002F\u002Fgithub.com\u002Ftaolei87\u002Frcnn\u002Ftree\u002Fmaster\u002Fcode\u002Frationale\n\n    + (2016) 反事实梯度——穆昆德·桑达拉贾恩、安库尔·塔利、齐奇·颜\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.02639\n\n    + 像素熵可用于检测相关图像区域（适用于卷积神经网络）\n      + 参见论文的可视化部分及图5\n        + (2017) 高分辨率乳腺癌筛查：基于多视角深度卷积神经网络——克日什托夫·J·格拉斯、斯泰西·沃尔夫森、沈一秋、吴楠、S·吉恩·金、埃里克·金、劳拉·希考克、乌贾斯·帕里克、琳达·莫伊、崔炯炫\n          + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.07047\n\n    + (2017) SVCCA：用于深度学习动态与可解释性的奇异向量典型相关分析——迈特拉·拉古、贾斯汀·吉尔默、杰森·约辛斯基、雅莎·索尔-迪克斯坦\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1706.05806\n      + https:\u002F\u002Fresearch.googleblog.com\u002F2017\u002F11\u002Finterpreting-deep-neural-networks-with.html\n\n    + (2017) 通过解释进行视觉说明：提升深度神经网络的视觉反馈能力——何塞·奥拉马斯、王凯莉、蒂妮·图伊特拉尔斯\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1712.06302\n\n    + (2017) 深度网络的公理化归因——穆昆德·桑达拉贾恩、安库尔·塔利、齐奇·颜\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.01365\n      + 代码：https:\u002F\u002Fgithub.com\u002Fankurtaly\u002FIntegrated-Gradients\n      + 提出了集成梯度方法\n      + 另请参阅：反事实梯度 https:\u002F\u002Farxiv.org\u002Fpdf\u002F1611.02639.pdf\n\n    + (2017) 通过传播激活差异学习重要特征——阿万提·施里库马尔、佩顿·格林赛德、安舒尔·昆达杰\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.02685\n      + 提出了DeepLIFT方法\n      + 代码：https:\u002F\u002Fgithub.com\u002Fkundajelab\u002Fdeeplift\n      + 视频：https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML\n\n    + (2017) 显著性方法的（不）可靠性——皮特-扬·金德曼斯、萨拉·胡克、朱利叶斯·阿德巴约、马克西米利安·阿尔伯、克里斯托夫·T·许特、斯文·代内、杜米特鲁·埃尔汉、彬·金\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.00867\n      + 回顾了用于提取对预测最重要像素的方法的失败案例\n\n    + (2018) 
分类器无关的显著性图提取——孔拉德·佐尔纳、克日什托夫·J·格拉斯、崔炯炫\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1805.08249\n      + 代码：https:\u002F\u002Fgithub.com\u002Fkondiz\u002Fcasme\n\n    + (2018) 深度神经网络可解释性方法的基准——萨拉·胡克、杜米特鲁·埃尔汉、皮特-扬·金德曼斯、彬·金\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.10758\n\n    + (2018) 可解释性的构建模块——克里斯·奥拉、阿尔温德·萨蒂亚纳拉扬、伊恩·约翰逊、珊·卡特、路德维希·舒伯特、凯瑟琳·叶、亚历山大·莫尔德文采夫\n      + https:\u002F\u002Fdx.doi.org\u002F10.23915\u002Fdistill.00010\n      + 包含一些嵌入式笔记本链接\n      + 使用Lucid库 https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Flucid\n\n    + (2018) 神经网络预测的层次化解释——钱丹·辛格、W·詹姆斯·默多克、余斌\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1806.05337\n      + 代码：https:\u002F\u002Fgithub.com\u002Fcsinva\u002Fhierarchical_dnn_interpretations\n\n    + (2018) iNNvestigate神经网络！——马克西米利安·阿尔伯、塞巴斯蒂安·拉普施金、菲利普·泽格勒、米里亚姆·海格勒、克里斯托夫·T·许特、格雷戈瓦·蒙塔冯、沃伊切赫·萨梅克、克劳斯-罗伯特·穆勒、斯文·代内、皮特-扬·金德曼斯\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1808.04260\n      + 代码：https:\u002F\u002Fgithub.com\u002Falbermax\u002Finnvestigate\n\n    + (2018) YASENN：通过划分激活序列解释神经网络——亚罗斯拉夫·扎罗夫、丹尼斯·科尔任科夫、帕维尔·什韦奇科夫、亚历山大·图日林\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1811.02783\n\n    + (2019) 注意力并非解释——萨尔塔克·贾因、拜伦·C·华莱士\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1902.10186\n\n    + (2019) NLP任务中注意力机制的可解释性——希卡尔·瓦希斯特、夏亚姆·乌帕迪亚伊、高拉夫·辛格·托马尔、玛纳尔·法鲁基\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1909.11218\n\n    + (2019) GRACE：生成简洁且信息丰富的对比样本以解释神经网络模型的预测——泰·勒、王苏航、李东元\n      + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1911.02042\n      + 代码：https:\u002F\u002Fgithub.com\u002Flethaiq\u002FGRACE_KDD20\n\n** 从复杂模型中提取可解释模型\n\n   + (2017) 利用查询和反例从循环神经网络中提取自动机——盖尔·魏斯、约阿夫·戈德堡、埃兰·亚哈夫\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09576\n\n   + (2017) 将神经网络蒸馏为软决策树——尼古拉斯·弗罗斯特、杰弗里·欣顿\n     + https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09784\n\n   + (2017) 利用透明模型蒸馏检测黑盒模型中的偏见——萨拉·谭、里奇·卡鲁阿纳、贾尔斯·胡克、尹·楼\n     + 
http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Fcontents\u002Fpapers\u002Fmain\u002FAIES_2018_paper_96.pdf\n\n** 模型可视化\n   + 可视化统计模型：摘下眼罩\n     + http:\u002F\u002Fhad.co.nz\u002Fstat645\u002Fmodel-vis.pdf\n\n   + 部分依赖图\n     + http:\u002F\u002Fscikit-learn.org\u002Fstable\u002Fauto_examples\u002Fensemble\u002Fplot_partial_dependence.html\n     + pdp：一个用于构建部分依赖图的R包\n       https:\u002F\u002Fjournal.r-project.org\u002Farchive\u002F2017\u002FRJ-2017-016\u002FRJ-2017-016.pdf\n       https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fpdp\u002Findex.html\n\n   + ggfortify：统一接口，用于可视化流行R包的统计结果\n     + https:\u002F\u002Fjournal.r-project.org\u002Farchive\u002F2016-2\u002Ftang-horikoshi-li.pdf\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fggfortify\u002Findex.html\n\n   + RandomForestExplainer\n     + 硕士论文 https:\u002F\u002Frawgit.com\u002FgeneticsMiNIng\u002FBlackBoxOpener\u002Fmaster\u002FrandomForestExplainer_Master_thesis.pdf\n     + R代码\n       + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FrandomForestExplainer\u002Findex.html\n       + 代码：https:\u002F\u002Fgithub.com\u002FMI2DataLab\u002FrandomForestExplainer\n\n   + ggRandomForest\n     + 论文（R 包 vignette） https:\u002F\u002Fgithub.com\u002Fehrlinger\u002FggRandomForests\u002Fraw\u002Fmaster\u002Fvignettes\u002FrandomForestSRC-Survival.pdf\n     + R代码\n       + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FggRandomForests\u002Findex.html\n       + 代码：https:\u002F\u002Fgithub.com\u002Fehrlinger\u002FggRandomForests\n\n** 精选综述演讲与教程\n   + 2017年ICML会议上的可解释机器学习教程\n     + 幻灯片：http:\u002F\u002Fpeople.csail.mit.edu\u002Fbeenkim\u002Fpapers\u002FBeenK_FinaleDV_ICML2017_tutorial.pdf\n\n   + P. 
Biecek，《给我看看你的模型——统计模型可视化工具》\n     + 视频：https:\u002F\u002Fchannel9.msdn.com\u002FEvents\u002FuseR-international-R-User-conferences\u002FuseR-International-R-User-2017-Conference\u002FShow-Me-Your-Model-tools-for-visualisation-of-statistical-models\n\n   + S. Ritchie，《AI的“就这样”故事》\n     + 视频：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=DiWkKqZChF0\n     + 幻灯片：https:\u002F\u002Fspeakerdeck.com\u002Fsritchie\u002Fjust-so-stories-for-ai-explaining-black-box-predictions\n\n   + C. Jarmul，《迈向可解释且负责任的模型》\n     + 视频：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=B3PtcF-6Dtc\n     + 幻灯片：https:\u002F\u002Fdocs.google.com\u002Fpresentation\u002Fd\u002Fe\u002F2PACX-1vR05kpagAbL5qo1QThxwu44TI5SQAws_UFVg3nUAmKp39uNG0xdBjcMA-VyEeqZRGGQtt0CS5h2DMTS\u002Fembed?start=false&loop=false&delayms=3000\n\n   + I. Ozsvald，《你真该早点知道的机器学习库》\n     + 演讲的很大一部分内容涉及模型解释与可视化\n     + 视频：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=nDF7_8FOhpI\n     + 相关的回归预测解释笔记本：https:\u002F\u002Fgithub.com\u002Fianozsvald\u002Fdata_science_delivered\u002Fblob\u002Fmaster\u002Fml_explain_regression_prediction.ipynb\n\n   + G. 
Varoquaux，《理解与诊断你的机器学习模型》（涵盖 PDP 和 Lime 等方法）\n     + 视频：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kbj3llSbaVA\n     + 幻灯片：http:\u002F\u002Fgael-varoquaux.info\u002Finterpreting_ml_tuto\u002F\n\n** 场所\n   + 可解释机器学习研讨会（NIPS 2017）（包含*论文*、*幻灯片*和*视频*的链接）\n     + http:\u002F\u002Finterpretable.ml\u002F\n     + 辩论：机器学习中可解释性是否必要\n       + https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=2hW05ZfsUUo\n   + 机器学习中人类可解释性研讨会（WHI），与 ICML 联合举办\n     + 2018 年（包含*论文*和*幻灯片*的链接）\n       + https:\u002F\u002Fsites.google.com\u002Fview\u002Fwhi2018\n       + 会议论文集 https:\u002F\u002Farxiv.org\u002Fhtml\u002F1807.01308\n     + 2017 年（包含*论文*和*幻灯片*的链接）\n       + https:\u002F\u002Fsites.google.com\u002Fview\u002Fwhi2017\u002Fhome\n       + 会议论文集 https:\u002F\u002Farxiv.org\u002Fhtml\u002F1708.02666\n     + 2016 年（包含*论文*的链接）\n       + https:\u002F\u002Fsites.google.com\u002Fsite\u002F2016whi\u002F\n       + 会议论文集 https:\u002F\u002Farxiv.org\u002Fhtml\u002F1607.02531 或 [[https:\u002F\u002Fdrive.google.com\u002Fopen?id=0B9mGJ4F63iKGZWk0cXZraTNjRVU][此处]]\n   + 针对自然语言处理的神经网络分析与解释研讨会（BlackboxNLP），与 EMNLP 联合举办\n     + 2019 年（以下链接未来可能会加上 2019 的前缀）\n       + https:\u002F\u002Fblackboxnlp.github.io\u002F\n       + https:\u002F\u002Fblackboxnlp.github.io\u002Fprogram.html\n       + 论文应在 arXiv 上可查\n     + 2018 年\n       + https:\u002F\u002Fblackboxnlp.github.io\u002F2018\n       + https:\u002F\u002Fblackboxnlp.github.io\u002Fprogram.html\n       + [[https:\u002F\u002Farxiv.org\u002Fsearch\u002Fadvanced?advanced=&terms-0-operator=AND&terms-0-term=BlackboxNLP&terms-0-field=comments&terms-1-operator=OR&terms-1-term=Analyzing+interpreting+neural+networks+NLP&terms-1-field=comments&classification-physics_archives=all&date-filter_by=all_dates&date-year=&date-from_date=&date-to_date=&date-date_type=submitted_date&abstracts=show&size=200&order=-announced_date_first][论文列表]]\n   + FAT\u002FML 机器学习中的公平性、问责制与透明度[[https:\u002F\u002Fwww.fatml.org\u002F]]\n     + 2018 年\n       + 
https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2018\n     + 2017 年\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2017\n     + 2016 年\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2016\n     + 2015 年\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2015\n     + 2014 年\n       + https:\u002F\u002Fwww.fatml.org\u002Fschedule\u002F2014\n   + AAAI\u002FACM 人工智能、伦理与社会年度会议\n     + 2019 年（以下链接未来可能会加上 2019 的前缀）\n       + http:\u002F\u002Fwww.aies-conference.com\u002Faccepted-papers\u002F\n     + 2018 年\n       + http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Faccepted-papers\u002F\n       + http:\u002F\u002Fwww.aies-conference.com\u002F2018\u002Faccepted-student-papers\u002F\n** 软件\n   与论文相关的软件已在相应论文处一并提及。此处仅收录独立软件。\n\n   + DALEX - R 包，用于描述性机器学习解释\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002FDALEX\u002FDALEX.pdf\n     + 代码：https:\u002F\u002Fgithub.com\u002Fpbiecek\u002FDALEX\n\n   + ELI5 - Python 包，专门用于调试机器学习分类器并解释其预测结果\n     + 代码：https:\u002F\u002Fgithub.com\u002FTeamHG-Memex\u002Feli5\n     + https:\u002F\u002Feli5.readthedocs.io\u002Fen\u002Flatest\u002F\n\n   + forestmodel - R 包，使用所谓的森林图可视化不同模型的系数\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fforestmodel\u002Findex.html\n     + 代码：https:\u002F\u002Fgithub.com\u002FNikNakk\u002Fforestmodel\n\n   + fscaret - R 包，提供来自 'caret' 的自动化特征选择功能\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Ffscaret\u002F\n     + 教程：https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Ffscaret\u002Fvignettes\u002Ffscaret.pdf\n\n   + iml - R 包，用于可解释的机器学习\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fiml\u002F\n     + 代码：https:\u002F\u002Fgithub.com\u002FchristophM\u002Fiml\n     + 出版物：http:\u002F\u002Fjoss.theoj.org\u002Fpapers\u002F10.21105\u002Fjoss.00786\n\n   + interpret - 
Python 包，由微软开发，用于训练可解释模型并解释黑盒系统\n     + 代码：https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002Finterpret\n\n   + lime - R 包，实现了 LIME 方法\n     + https:\u002F\u002Fgithub.com\u002Fthomasp85\u002Flime\n\n   + lofo-importance - Python 包，基于“留一法”计算特征重要性\n     + 代码：https:\u002F\u002Fgithub.com\u002Faerdem4\u002Flofo-importance\n\n   + Lucid - 一套用于神经网络可解释性研究的基础设施和工具集合\n     + 代码：https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Flucid\n\n   + praznik - R 包，包含一系列特征选择过滤器，采用贪婪优化互信息为基础的有效性标准，参见 JMLR 13, 27−66 (2012)\n     + CRAN https:\u002F\u002Fcran.r-project.org\u002Fweb\u002Fpackages\u002Fpraznik\u002Findex.html\n     + 代码：https:\u002F\u002Fnotabug.org\u002Fmbq\u002Fpraznik\n\n   + yellowbrick - Python 包，提供可视化分析和诊断工具，以帮助选择机器学习模型\n     + 代码：https:\u002F\u002Fgithub.com\u002FDistrictDataLabs\u002Fyellowbrick\n     + http:\u002F\u002Fwww.scikit-yb.org\u002Fen\u002Flatest\u002F\n\n** 其他资源\n   + Patrick Hall 整理的“Awesome”资源列表\n     + https:\u002F\u002Fgithub.com\u002Fjphall663\u002Fawesome-machine-learning-interpretability\n   + Przemysław Biecek 整理的“Awesome” XAI 资源列表\n     + https:\u002F\u002Fgithub.com\u002Fpbiecek\u002Fxai_resources","# awesome-interpretable-machine-learning 快速上手指南\n\n`awesome-interpretable-machine-learning` 并非一个单一的 Python 包或可执行软件，而是一个**精选资源列表（Awesome List）**，汇集了关于可解释性机器学习（Interpretable ML）的论文、模型实现、教程和开源代码库。\n\n本指南旨在帮助开发者利用该列表中的资源，快速搭建可解释性机器学习环境并掌握核心工具（如 LIME, SHAP, Boruta 等）。\n\n## 环境准备\n\n由于列表中包含多种语言（主要是 Python 和 R）的实现，建议根据你计划使用的具体算法准备环境。以下以最常用的 **Python** 生态为例。\n\n*   **操作系统**: Linux, macOS, 或 Windows\n*   **Python 版本**: 推荐 Python 3.8 及以上\n*   **前置依赖**:\n    *   `pip` (Python 包管理工具)\n    *   `conda` (可选，推荐用于管理数据科学环境)\n    *   基础数据科学库：`numpy`, `pandas`, `scikit-learn`, `matplotlib`\n\n> **国内加速建议**：\n> 在安装依赖时，推荐使用清华源或阿里源以提升下载速度：\n> ```bash\n> pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003Cpackage_name>\n> ```\n\n## 安装步骤\n\n由于这是一个资源列表，你需要安装列表中提到的具体工具库。以下是两个最主流、通用的可解释性工具库的安装命令：\n\n### 1. 
安装 SHAP (SHapley Additive exPlanations)\n适用于全局和局部特征重要性分析，支持多种模型。\n\n```bash\npip install shap\n# 或使用国内镜像\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple shap\n```\n\n### 2. 安装 LIME (Local Interpretable Model-agnostic Explanations)\n适用于解释单个预测结果，支持图像、文本和表格数据。\n\n```bash\npip install lime\n# 或使用国内镜像\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple lime\n```\n\n### 3. 其他常用库 (可选)\n*   **Boruta** (特征选择): `pip install boruta` (注意：Python 版本通常为 `boruta_py`)\n*   **ELI5** (调试和可视化): `pip install eli5`\n*   **InterpretML** (微软出品，包含多种解释器): `pip install interpret`\n\n## 基本使用\n\n以下示例展示如何使用列表中推荐的 **SHAP** 库对一个简单的随机森林模型进行解释。这是理解“黑盒”模型最直接的入门方式。\n\n### 示例：使用 SHAP 解释随机森林模型\n\n```python\nimport shap\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\n\n# 1. 准备数据\nX, y = load_iris(return_X_y=True)\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n# 2. 训练模型 (黑盒模型)\nmodel = RandomForestClassifier(n_estimators=100, random_state=42)\nmodel.fit(X_train, y_train)\n\n# 3. 创建 SHAP 解释器\n# TreeExplainer 专门针对树模型进行了优化，速度极快\nexplainer = shap.TreeExplainer(model)\n\n# 4. 计算 SHAP 值\nshap_values = explainer.shap_values(X_test)\n\n# 多分类模型的 SHAP 值按类别组织：旧版 shap 返回 list，新版返回 (样本, 特征, 类别) 三维数组\n# 统一取出类别 0 对应的二维矩阵，便于后续绘图\nif isinstance(shap_values, list):\n    shap_values = shap_values[0]\nelif shap_values.ndim == 3:\n    shap_values = shap_values[:, :, 0]\n\n# 5. 
可视化结果\n# 绘制摘要图，展示特征对模型输出的整体影响\nshap.summary_plot(shap_values, X_test, feature_names=load_iris().feature_names)\n\n# 绘制依赖图，查看单个特征与 SHAP 值的关系\nshap.dependence_plot(0, shap_values, X_test, feature_names=load_iris().feature_names)\n```\n\n### 进阶：使用 LIME 解释单个样本\n\n如果你需要解释**某一次特定预测**的原因（例如：为什么模型认为这张图片是猫？），可以使用 LIME：\n\n```python\nimport lime\nimport lime.lime_tabular\nimport numpy as np\n\n# 假设已有训练好的 model 和训练数据 X_train (numpy array)\n# 定义预测函数 (LIME 需要概率输出)\ndef predict_fn(x):\n    return model.predict_proba(x)\n\n# 创建 LIME 解释器\nexplainer = lime.lime_tabular.LimeTabularExplainer(\n    training_data=X_train,\n    feature_names=load_iris().feature_names,\n    class_names=['setosa', 'versicolor', 'virginica'],\n    mode='classification'\n)\n\n# 解释测试集中的第一个样本\nexp = explainer.explain_instance(X_test[0], predict_fn, num_features=4)\n\n# 在 Jupyter Notebook 中显示结果\nexp.show_in_notebook()\n# 或者打印文本结果\nprint(exp.as_list())\n```\n\n> **提示**：`awesome-interpretable-machine-learning` 列表中包含了大量论文链接（如 Friedman 的 Rule Ensembles, Rudin 的 Interpretable Classifiers 等）。若要复现这些特定算法，请访问 README 中对应的 GitHub 代码链接（如 `https:\u002F\u002Fgithub.com\u002F12wang3\u002Fmllp`）获取专用实现。","某金融风控团队正在构建信用卡欺诈检测模型，急需向监管机构证明算法决策的公平性与逻辑依据。\n\n### 没有 awesome-interpretable-machine-learning 时\n- 团队盲目选用高精度的黑盒模型（如深度神经网络），却无法解释为何拒绝特定用户的申请，面临合规审计风险。\n- 在筛选关键风险特征时，缺乏系统性的方法论指导，误用了存在偏差的特征重要性评估方式，导致遗漏核心欺诈指标。\n- 业务人员完全无法理解模型逻辑，只能机械执行预测结果，一旦模型出错难以快速定位原因，信任度极低。\n- 面对监管问询，只能提供晦涩的数学公式，无法生成人类可读的规则或可视化报告，沟通成本极高。\n\n### 使用 awesome-interpretable-machine-learning 后\n- 团队参考资源列表，转而采用贝叶斯规则集或可解释决策树等模型，在保持精度的同时直接输出“若交易地点异常且金额过大则拦截”的清晰规则。\n- 利用列表中推荐的无偏特征重要性评估方法（如 Model Reliance），精准识别出真正的欺诈驱动因子，剔除了无关噪声干扰。\n- 借助可视化工具将复杂的决策边界转化为直观图表，业务人员能轻松理解判罚逻辑，并主动参与模型优化迭代。\n- 直接引用成熟的学术论文与开源实现，快速生成符合监管要求的解释性报告，将合规审查周期从数周缩短至两天。\n\nawesome-interpretable-machine-learning 
通过提供经过筛选的可解释性资源与方法论，帮助团队在确保模型性能的同时，彻底打破了算法黑盒，实现了技术落地与合规透明的双赢。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flopusz_awesome-interpretable-machine-learning_7a618ba0.png","lopusz","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flopusz_cd70648c.jpg","http:\u002F\u002Flopusz.github.io","https:\u002F\u002Fgithub.com\u002Flopusz",[19,23],{"name":20,"color":21,"percentage":22},"Python","#3572A5",98.3,{"name":24,"color":25,"percentage":26},"Makefile","#427819",1.7,917,141,"2026-04-05T12:50:09",1,"","未说明",{"notes":34,"python":32,"dependencies":35},"该仓库是一个“意见性资源列表”（Opinionated list of resources），主要收集了关于可解释机器学习的论文、博客文章、演示文稿以及其他开源工具（如 LIME, SHAP, Boruta, rFerns 等）的链接。它本身不是一个可直接运行的软件工具或库，因此没有特定的操作系统、GPU、内存、Python 版本或依赖库要求。用户需根据列表中引用的具体子项目（例如 github.com\u002Fmarcotcr\u002Flime 或 github.com\u002Fslundberg\u002Fshap）去查阅各自的运行环境需求。",[],[37,38,39],"其他","开发框架","数据工具",[41,42,43,44,45,46,47],"interpretable-machine-learning","interpretable-ml","interpretable-ai","xai","explainable-ai","machine-learning","data-science",2,"ready","2026-03-27T02:49:30.150509","2026-04-19T09:15:04.546669",[],[],[55,66,74,83,91,100],{"id":56,"name":57,"github_repo":58,"description_zh":59,"stars":60,"difficulty_score":61,"last_commit_at":62,"category_tags":63,"status":49},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows 
WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[64,38,65,39],"Agent","图像",{"id":67,"name":68,"github_repo":69,"description_zh":70,"stars":71,"difficulty_score":61,"last_commit_at":72,"category_tags":73,"status":49},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[38,65,64],{"id":75,"name":76,"github_repo":77,"description_zh":78,"stars":79,"difficulty_score":48,"last_commit_at":80,"category_tags":81,"status":49},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160411,"2026-04-18T23:33:24",[38,64,82],"语言模型",{"id":84,"name":85,"github_repo":86,"description_zh":87,"stars":88,"difficulty_score":48,"last_commit_at":89,"category_tags":90,"status":49},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 
Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[38,65,64],{"id":92,"name":93,"github_repo":94,"description_zh":95,"stars":96,"difficulty_score":48,"last_commit_at":97,"category_tags":98,"status":49},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[99,64,65,38],"插件",{"id":101,"name":102,"github_repo":103,"description_zh":104,"stars":105,"difficulty_score":48,"last_commit_at":106,"category_tags":107,"status":49},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[99,38]]