[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-Trusted-AI--AIF360":3,"tool-Trusted-AI--AIF360":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":78,"owner_location":78,"owner_email":79,"owner_twitter":80,"owner_website":78,"owner_url":81,"languages":82,"stars":99,"forks":100,"last_commit_at":101,"license":102,"difficulty_score":23,"env_os":103,"env_gpu":104,"env_ram":105,"env_deps":106,"category_tags":116,"github_topics":117,"view_count":138,"oss_zip_url":78,"oss_zip_packed_at":78,"status":16,"created_at":139,"updated_at":140,"faqs":141,"releases":172},1803,"Trusted-AI\u002FAIF360","AIF360","A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.","AIF360 是一款专为检测和缓解人工智能偏见而设计的开源工具包。在金融、医疗、招聘等关键领域，机器学习模型若存在数据偏差，可能导致不公平的决策结果。AIF360 旨在解决这一核心痛点，帮助开发者在 AI 应用的全生命周期中识别并修正此类问题。\n\n这款工具主要面向数据科学家、AI 研究人员以及关注算法伦理的开发团队。它提供了三大核心能力：首先，内置了全面的公平性指标体系，可量化评估数据集和模型是否存在歧视；其次，为这些复杂的指标提供通俗易懂的解释，降低理解门槛；最后，集成了十余种前沿的去偏算法（如优化预处理、对抗性去偏等），支持在数据预处理、模型训练及后处理不同阶段进行干预。\n\nAIF360 的独特亮点在于其强大的扩展性与多语言支持，同时提供 Python 和 R 版本，并能将实验室阶段的学术研究转化为实际可用的工程方案。无论是希望快速上手的初学者，还是需要进行深度定制的研究者，都能通过其丰富的教程、交互式体验及详细指南找到合适的解决方案。如果你正在构建需要高度公平性的 AI 系统，AIF360 将是值得信赖的得力助手。","# AI Fairness 360 (AIF360)\n\n[![Continuous 
Integration](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Factions\u002Fworkflows\u002Fci.yml)\n[![Documentation](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTrusted-AI_AIF360_readme_6bf48b3e9a6d.png)](https:\u002F\u002Faif360.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Faif360.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Faif360)\n[![CRAN\\_Status\\_Badge](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTrusted-AI_AIF360_readme_1be8caef56dd.png)](https:\u002F\u002Fcran.r-project.org\u002Fpackage=aif360)\n\nThe AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the\nresearch community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. AI Fairness 360 package is available in both Python and R.\n\nThe AI Fairness 360 package includes\n1) a comprehensive set of metrics for datasets and models to test for biases,\n2) explanations for these metrics, and\n3) algorithms to mitigate bias in datasets and models.\nIt is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging\nas finance, human capital management, healthcare, and education. We invite you to use it and improve it.\n\nThe [AI Fairness 360 interactive experience](https:\u002F\u002Faif360.res.ibm.com\u002Fdata)\nprovides a gentle introduction to the concepts and capabilities. The [tutorials\nand other notebooks](.\u002Fexamples) offer a deeper, data scientist-oriented\nintroduction. The complete API is also available.\n\nBeing a comprehensive set of capabilities, it may be confusing to figure out\nwhich metrics and algorithms are most appropriate for a given use case. 
To\nhelp, we have created some [guidance\nmaterial](https:\u002F\u002Faif360.res.ibm.com\u002Fresources#guidance) that can be\nconsulted.\n\nWe have developed the package with extensibility in mind. This library is still\nin development. We encourage the contribution of your metrics, explainers, and\ndebiasing algorithms.\n\nGet in touch with us on [Slack](https:\u002F\u002Faif360.slack.com) (invitation\n[here](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Faif360\u002Fshared_invite\u002Fzt-5hfvuafo-X0~g6tgJQ~7tIAT~S294TQ))!\n\n\n## Supported bias mitigation algorithms\n\n* Optimized Preprocessing ([Calmon et al., 2017](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6988-optimized-pre-processing-for-discrimination-prevention))\n* Disparate Impact Remover ([Feldman et al., 2015](https:\u002F\u002Fdoi.org\u002F10.1145\u002F2783258.2783311))\n* Equalized Odds Postprocessing ([Hardt et al., 2016](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6374-equality-of-opportunity-in-supervised-learning))\n* Reweighing ([Kamiran and Calders, 2012](http:\u002F\u002Fdoi.org\u002F10.1007\u002Fs10115-011-0463-8))\n* Reject Option Classification ([Kamiran et al., 2012](https:\u002F\u002Fdoi.org\u002F10.1109\u002FICDM.2012.45))\n* Prejudice Remover Regularizer ([Kamishima et al., 2012](https:\u002F\u002Frd.springer.com\u002Fchapter\u002F10.1007\u002F978-3-642-33486-3_3))\n* Calibrated Equalized Odds Postprocessing ([Pleiss et al., 2017](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7151-on-fairness-and-calibration))\n* Learning Fair Representations ([Zemel et al., 2013](http:\u002F\u002Fproceedings.mlr.press\u002Fv28\u002Fzemel13.html))\n* Adversarial Debiasing ([Zhang et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.07593))\n* Meta-Algorithm for Fair Classification ([Celis et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.06055))\n* Rich Subgroup Fairness ([Kearns, Neel, Roth, Wu, 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.05144))\n* Exponentiated 
Gradient Reduction ([Agarwal et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.02453))\n* Grid Search Reduction ([Agarwal et al., 2018](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.02453), [Agarwal et al., 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.12843))\n* Fair Data Adaptation ([Plečko and Meinshausen, 2020](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fv21\u002F19-966.html), [Plečko et al., 2021](https:\u002F\u002Farxiv.org\u002Fabs\u002F2110.10200))\n* Sensitive Set Invariance\u002FSensitive Subspace Robustness ([Yurochkin and Sun, 2020](https:\u002F\u002Farxiv.org\u002Fabs\u002F2006.14168), [Yurochkin et al., 2019](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.00020))\n\n## Supported fairness metrics\n\n* Comprehensive set of group fairness metrics derived from selection rates and error rates including rich subgroup fairness\n* Comprehensive set of sample distortion metrics\n* Generalized Entropy Index ([Speicher et al., 2018](https:\u002F\u002Fdoi.org\u002F10.1145\u002F3219819.3220046))\n* Differential Fairness and Bias Amplification ([Foulds et al., 2018](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1807.08362))\n* Bias Scan with Multi-Dimensional Subset Scan ([Zhang, Neill, 2017](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.08292))\n\n## Setup\n\n### R\n\n``` r\ninstall.packages(\"aif360\")\n```\n\nFor more details regarding the R setup, please refer to instructions [here](aif360\u002Faif360-r\u002FREADME.md).\n\n### Python\n\nSupported Python Configurations:\n\n| OS      | Python version |\n| ------- | -------------- |\n| macOS   | 3.8 – 3.11     |\n| Ubuntu  | 3.8 – 3.11     |\n| Windows | 3.8 – 3.11     |\n\n### (Optional) Create a virtual environment\n\nAIF360 requires specific versions of many Python packages which may conflict\nwith other projects on your system. A virtual environment manager is strongly\nrecommended to ensure dependencies may be installed safely. 
If you have trouble\ninstalling AIF360, try this first.\n\n#### Conda\n\nConda is recommended for all configurations though Virtualenv is generally\ninterchangeable for our purposes. [Miniconda](https:\u002F\u002Fconda.io\u002Fminiconda.html)\nis sufficient (see [the difference between Anaconda and\nMiniconda](https:\u002F\u002Fconda.io\u002Fdocs\u002Fuser-guide\u002Finstall\u002Fdownload.html#anaconda-or-miniconda)\nif you are curious) if you do not already have conda installed.\n\nThen, to create a new Python 3.11 environment, run:\n\n```bash\nconda create --name aif360 python=3.11\nconda activate aif360\n```\n\nThe shell should now look like `(aif360) $`. To deactivate the environment, run:\n\n```bash\n(aif360)$ conda deactivate\n```\n\nThe prompt will return to `$ `.\n\n### Install with `pip`\n\nTo install the latest stable version from PyPI, run:\n\n```bash\npip install aif360\n```\n\nNote: Some algorithms require additional dependencies (although the metrics will\nall work out-of-the-box). 
To install with certain algorithm dependencies\nincluded, run, e.g.:\n\n```bash\npip install 'aif360[LFR,OptimPreproc]'\n```\n\nor, for complete functionality, run:\n\n```bash\npip install 'aif360[all]'\n```\n\nThe options for available extras are: `OptimPreproc, LFR, AdversarialDebiasing,\nDisparateImpactRemover, LIME, ART, Reductions, FairAdapt, inFairness,\nLawSchoolGPA, notebooks, tests, docs, all`\n\nIf you encounter any errors, try the [Troubleshooting](#troubleshooting) steps.\n\n### Manual installation\n\nClone the latest version of this repository:\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\n```\n\nIf you'd like to run the examples, download the datasets now and place them in\ntheir respective folders as described in\n[aif360\u002Fdata\u002FREADME.md](aif360\u002Fdata\u002FREADME.md).\n\nThen, navigate to the root directory of the project and run:\n\n```bash\npip install --editable '.[all]'\n```\n\n#### Run the Examples\n\nTo run the example notebooks, complete the manual installation steps above.\nThen, if you did not use the `[all]` option, install the additional requirements\nas follows:\n\n```bash\npip install -e '.[notebooks]'\n```\n\nFinally, if you did not already, download the datasets as described in\n[aif360\u002Fdata\u002FREADME.md](aif360\u002Fdata\u002FREADME.md).\n\n### Troubleshooting\n\nIf you encounter any errors during the installation process, look for your\nissue here and try the solutions.\n\n#### TensorFlow\n\nSee the [Install TensorFlow with pip](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip)\npage for detailed instructions.\n\nNote: we require `'tensorflow >= 1.13.1'`.\n\nOnce tensorflow is installed, try re-running:\n\n```bash\npip install 'aif360[AdversarialDebiasing]'\n```\n\nTensorFlow is only required for use with the\n`aif360.algorithms.inprocessing.AdversarialDebiasing` class.\n\n#### CVXPY\n\nOn MacOS, you may first have to install the Xcode Command Line Tools if you\nnever 
have previously:\n\n```sh\nxcode-select --install\n```\n\nOn Windows, you may need to download the [Microsoft C++ Build Tools for Visual\nStudio 2019](https:\u002F\u002Fvisualstudio.microsoft.com\u002Fthank-you-downloading-visual-studio\u002F?sku=BuildTools&rel=16).\nSee the [CVXPY Install](https:\u002F\u002Fwww.cvxpy.org\u002Finstall\u002Findex.html#mac-os-x-windows-and-linux)\npage for up-to-date instructions.\n\nThen, try reinstalling via:\n\n```bash\npip install 'aif360[OptimPreproc]'\n```\n\nCVXPY is only required for use with the\n`aif360.algorithms.preprocessing.OptimPreproc` class.\n\n## Using AIF360\n\nThe `examples` directory contains a diverse collection of jupyter notebooks\nthat use AI Fairness 360 in various ways. Both tutorials and demos illustrate\nworking code using AIF360. Tutorials provide additional discussion that walks\nthe user through the various steps of the notebook. See the details about\n[tutorials and demos here](examples\u002FREADME.md)\n\n## Citing AIF360\n\nA technical description of AI Fairness 360 is available in this\n[paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01943). Below is the bibtex entry for this\npaper.\n\n```\n@misc{aif360-oct-2018,\n    title = \"{AI Fairness} 360:  An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias\",\n    author = {Rachel K. E. Bellamy and Kuntal Dey and Michael Hind and\n\tSamuel C. Hoffman and Stephanie Houde and Kalapriya Kannan and\n\tPranay Lohia and Jacquelyn Martino and Sameep Mehta and\n\tAleksandra Mojsilovic and Seema Nagar and Karthikeyan Natesan Ramamurthy and\n\tJohn Richards and Diptikalyan Saha and Prasanna Sattigeri and\n\tMoninder Singh and Kush R. 
Varshney and Yunfeng Zhang},\n    month = oct,\n    year = {2018},\n    url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01943}\n}\n```\n\n## AIF360 Videos\n\n* Introductory [video](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=X1NsrcaRQTE) to AI\n  Fairness 360 by Kush Varshney, September 20, 2018 (32 mins)\n\n## Contributing\nThe development fork for Rich Subgroup Fairness (`inprocessing\u002Fgerryfair_classifier.py`) is [here](https:\u002F\u002Fgithub.com\u002Fsethneel\u002Faif360). Contributions are welcome and a list of potential contributions from the authors can be found [here](https:\u002F\u002Ftrello.com\u002Fb\u002F0OwPcbVr\u002Fgerryfair-development).\n","# AI公平性360（AIF360）\n\n[![持续集成](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Factions\u002Fworkflows\u002Fci.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Factions\u002Fworkflows\u002Fci.yml)\n[![文档](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTrusted-AI_AIF360_readme_6bf48b3e9a6d.png)](https:\u002F\u002Faif360.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n[![PyPI版本](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Faif360.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Faif360)\n[![CRAN_状态_徽章](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTrusted-AI_AIF360_readme_1be8caef56dd.png)](https:\u002F\u002Fcran.r-project.org\u002Fpackage=aif360)\n\nAI公平性360工具包是一个可扩展的开源库，包含由研究社区开发的技术，旨在帮助在整个AI应用生命周期中检测和缓解机器学习模型中的偏见。AI公平性360软件包同时提供Python和R两种语言的接口。\n\nAI公平性360软件包包括：\n1) 一套全面的数据集和模型偏差检测指标，\n2) 对这些指标的解释说明，\n3) 用于减轻数据集和模型中偏见的算法。\n它旨在将实验室中的算法研究成果转化为实际应用，覆盖金融、人力资本管理、医疗保健和教育等广泛领域。我们诚邀您使用并改进该工具包。\n\n[AIF360交互式体验](https:\u002F\u002Faif360.res.ibm.com\u002Fdata) 提供了对相关概念和功能的入门介绍。[教程及其他笔记本](.\u002Fexamples) 
则提供了更深入、面向数据科学家的介绍。完整的API文档也可供查阅。\n\n由于该工具包功能全面，用户可能难以确定在特定应用场景下应选择哪些指标和算法。为此，我们准备了一些[指导材料](https:\u002F\u002Faif360.res.ibm.com\u002Fresources#guidance)，可供参考。\n\n我们在设计该工具包时充分考虑了其可扩展性。目前该库仍在开发中，我们鼓励大家贡献自己的指标、解释器和去偏算法。\n\n欢迎通过[Slack](https:\u002F\u002Faif360.slack.com)与我们联系（邀请链接[此处](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Faif360\u002Fshared_invite\u002Fzt-5hfvuafo-X0~g6tgJQ~7tIAT~S294TQ)）！\n\n## 支持的偏见缓解算法\n\n* 优化预处理（Calmon等，2017）\n* 差异化影响消除器（Feldman等，2015）\n* 平等机会后处理（Hardt等，2016）\n* 重加权法（Kamiran和Calders，2012）\n* 拒绝选项分类法（Kamiran等，2012）\n* 偏见去除正则化器（Kamishima等，2012）\n* 校准后的平等机会后处理（Pleiss等，2017）\n* 学习公平表示法（Zemel等，2013）\n* 对抗性去偏法（Zhang等，2018）\n* 公平分类元算法（Celis等，2018）\n* 丰富子组公平性（Kearns、Neel、Roth、Wu，2018）\n* 指数梯度约减法（Agarwal等，2018）\n* 网格搜索约减法（Agarwal等，2018；Agarwal等，2019）\n* 公平数据适应法（Plečko和Meinshausen，2020；Plečko等，2021）\n* 敏感集合不变性\u002F敏感子空间鲁棒性（Yurochkin和Sun，2020；Yurochkin等，2019）\n\n## 支持的公平性指标\n\n* 从选择率和错误率衍生出的一系列群体公平性指标，包括丰富子组公平性\n* 一系列样本扭曲度量指标\n* 广义熵指数（Speicher等，2018）\n* 差分公平性和偏见放大效应（Foulds等，2018）\n* 多维子集扫描偏见检测法（Zhang、Neill，2017）\n\n## 安装\n\n### R\n\n```r\ninstall.packages(\"aif360\")\n```\n\n有关R环境安装的更多详细信息，请参阅[此处](aif360\u002Faif360-r\u002FREADME.md)的说明。\n\n### Python\n\n支持的Python配置：\n\n| 操作系统   | Python版本 |\n| ---------- | ---------- |\n| macOS      | 3.8 – 3.11 |\n| Ubuntu     | 3.8 – 3.11 |\n| Windows    | 3.8 – 3.11 |\n\n### （可选）创建虚拟环境\n\nAIF360需要多个特定版本的Python包，这些依赖项可能会与您系统上的其他项目产生冲突。强烈建议使用虚拟环境管理器来确保依赖项能够安全安装。如果您在安装AIF360时遇到困难，请首先尝试此方法。\n\n#### Conda\n\n尽管Virtualenv通常也可以满足我们的需求，但我们推荐使用Conda。如果您尚未安装Conda，仅需[Miniconda](https:\u002F\u002Fconda.io\u002Fminiconda.html)即可（如果您想了解Anaconda和Miniconda的区别，可以参阅[此处](https:\u002F\u002Fconda.io\u002Fdocs\u002Fuser-guide\u002Finstall\u002Fdownload.html#anaconda-or-miniconda)）。然后，要创建一个新的Python 3.11环境，请运行：\n\n```bash\nconda create --name aif360 python=3.11\nconda activate aif360\n```\n\n此时终端提示符应显示为`(aif360) $`。要退出该环境，运行：\n\n```bash\n(aif360)$ conda deactivate\n```\n\n提示符将恢复为`$ `。\n\n### 使用pip安装\n\n要从PyPI安装最新稳定版本，运行：\n\n```bash\npip 
install aif360\n```\n\n注意：部分算法需要额外的依赖项（尽管大多数指标无需额外安装即可使用）。若需安装包含特定算法依赖的版本，例如：\n\n```bash\npip install 'aif360[LFR,OptimPreproc]'\n```\n\n或为了获得完整功能，运行：\n\n```bash\npip install 'aif360[all]'\n```\n\n可用的附加选项包括：`OptimPreproc, LFR, AdversarialDebiasing, DisparateImpactRemover, LIME, ART, Reductions, FairAdapt, inFairness, LawSchoolGPA, notebooks, tests, docs, all`。\n\n如果遇到任何问题，请尝试[故障排除](#troubleshooting)步骤。\n\n### 手动安装\n\n克隆该仓库的最新版本：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\n```\n\n如果您想运行示例，请立即下载数据集，并按照 [aif360\u002Fdata\u002FREADME.md](aif360\u002Fdata\u002FREADME.md) 中的说明将其放置到各自的文件夹中。\n\n然后，导航到项目根目录并运行：\n\n```bash\npip install --editable '.[all]'\n```\n\n#### 运行示例\n\n要运行示例笔记本，您需要先完成上述手动安装步骤。如果之前没有使用 `[all]` 选项，请按以下方式安装额外的依赖项：\n\n```bash\npip install -e '.[notebooks]'\n```\n\n最后，如果您尚未下载数据集，请按照 [aif360\u002Fdata\u002FREADME.md](aif360\u002Fdata\u002FREADME.md) 中的说明进行下载。\n\n### 故障排除\n\n如果在安装过程中遇到任何错误，请在此处查找您的问题并尝试相应的解决方案。\n\n#### TensorFlow\n\n请参阅 [通过 pip 安装 TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Finstall\u002Fpip) 页面以获取详细说明。\n\n注意：我们要求 `'tensorflow >= 1.13.1'`。\n\n安装完 TensorFlow 后，尝试重新运行：\n\n```bash\npip install 'aif360[AdversarialDebiasing]'\n```\n\nTensorFlow 仅在使用 `aif360.algorithms.inprocessing.AdversarialDebiasing` 类时才需要。\n\n#### CVXPY\n\n在 macOS 上，如果您之前从未安装过 Xcode 命令行工具，可能需要先安装：\n\n```sh\nxcode-select --install\n```\n\n在 Windows 上，您可能需要下载 [适用于 Visual Studio 2019 的 Microsoft C++ 构建工具](https:\u002F\u002Fvisualstudio.microsoft.com\u002Fthank-you-downloading-visual-studio\u002F?sku=BuildTools&rel=16)。有关最新说明，请参阅 [CVXPY 安装页面](https:\u002F\u002Fwww.cvxpy.org\u002Finstall\u002Findex.html#mac-os-x-windows-and-linux)。\n\n然后，尝试通过以下命令重新安装：\n\n```bash\npip install 'aif360[OptimPreproc]'\n```\n\nCVXPY 仅在使用 `aif360.algorithms.preprocessing.OptimPreproc` 类时才需要。\n\n## 使用 AIF360\n\n`examples` 目录包含各种 Jupyter 笔记本，这些笔记本以不同方式使用 AI Fairness 360。教程和演示都展示了使用 AIF360 的实际代码。教程还提供了额外的讨论，引导用户逐步完成笔记本中的各个步骤。有关教程和演示的详细信息，请参阅 
[examples\u002FREADME.md](examples\u002FREADME.md)。\n\n## 引用 AIF360\n\nAI Fairness 360 的技术描述可在该 [论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01943) 中找到。以下是该论文的 BibTeX 条目。\n\n```\n@misc{aif360-oct-2018,\n    title = \"{AI Fairness} 360:  An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias\",\n    author = {Rachel K. E. Bellamy and Kuntal Dey and Michael Hind and\n\tSamuel C. Hoffman and Stephanie Houde and Kalapriya Kannan and\n\tPranay Lohia and Jacquelyn Martino and Sameep Mehta and\n\tAleksandra Mojsilovic and Seema Nagar and Karthikeyan Natesan Ramamurthy and\n\tJohn Richards and Diptikalyan Saha and Prasanna Sattigeri and\n\tMoninder Singh and Kush R. Varshney and Yunfeng Zhang},\n    month = oct,\n    year = {2018},\n    url = {https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.01943}\n}\n```\n\n## AIF360 视频\n\n* Kush Varshney 于 2018 年 9 月 20 日制作的 AI Fairness 360 入门 [视频](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=X1NsrcaRQTE)（32 分钟）\n\n## 贡献\n\nRich Subgroup Fairness 的开发分支（`inprocessing\u002Fgerryfair_classifier.py`）位于 [这里](https:\u002F\u002Fgithub.com\u002Fsethneel\u002Faif360)。欢迎贡献，作者列出的潜在贡献清单可在 [这里](https:\u002F\u002Ftrello.com\u002Fb\u002F0OwPcbVr\u002Fgerryfair-development) 查看。","# AIF360 快速上手指南\n\nAI Fairness 360 (AIF360) 是一个可扩展的开源工具包，旨在帮助开发者在 AI 应用生命周期中检测和缓解机器学习模型中的偏见。它提供了丰富的公平性指标、解释说明以及去偏算法，支持 Python 和 R 语言。\n\n## 环境准备\n\n### 系统要求\nAIF360 支持以下操作系统及 Python 版本：\n\n| 操作系统 | Python 版本 |\n| :--- | :--- |\n| macOS | 3.8 – 3.11 |\n| Ubuntu \u002F Linux | 3.8 – 3.11 |\n| Windows | 3.8 – 3.11 |\n\n### 前置依赖\n由于 AIF360 依赖特定版本的多个 Python 包，强烈建议使用虚拟环境以避免与系统中其他项目冲突。推荐使用 **Conda** (Miniconda 即可)。\n\n部分高级算法（如对抗去偏、优化预处理）需要额外的底层依赖：\n*   **TensorFlow**: 用于 `AdversarialDebiasing` 算法。\n*   **CVXPY**: 用于 `OptimPreproc` 算法。\n    *   **macOS**: 可能需要先安装 Xcode Command Line Tools (`xcode-select --install`)。\n    *   **Windows**: 可能需要安装 [Microsoft C++ Build Tools](https:\u002F\u002Fvisualstudio.microsoft.com\u002Fthank-you-downloading-visual-studio\u002F?sku=BuildTools&rel=16)。\n\n## 安装步骤\n\n### 1. 
创建并激活虚拟环境 (推荐)\n使用 Conda 创建名为 `aif360` 的 Python 3.11 环境：\n\n```bash\nconda create --name aif360 python=3.11\nconda activate aif360\n```\n\n### 2. 安装 AIF360\n\n#### 方案 A：安装基础版 (仅包含指标计算)\n如果只需使用公平性指标检测功能，可直接安装：\n\n```bash\npip install aif360\n# 国内用户推荐使用清华源加速\n# pip install aif360 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n#### 方案 B：安装完整版 (包含所有去偏算法)\n若需使用全部去偏算法（如 LFR, OptimPreproc, AdversarialDebiasing 等），请安装完整依赖：\n\n```bash\npip install 'aif360[all]'\n# 国内用户推荐使用清华源加速\n# pip install 'aif360[all]' -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：如果安装过程中遇到编译错误，请确保已按照“环境准备”中的说明安装了 TensorFlow 或 CVXPY 所需的系统级构建工具。\n\n#### 方案 C：按需安装特定算法\n若只需特定算法以减少依赖体积，可指定 extras：\n\n```bash\n# 例如：仅安装 LFR 和 OptimPreproc 算法依赖\npip install 'aif360[LFR,OptimPreproc]'\n```\n\n## 基本使用\n\n以下是一个最简单的 Python 示例，演示如何加载数据集并计算常见的群体公平性指标（如统计奇偶差异）。\n\n```python\nfrom aif360.datasets import AdultDataset\nfrom aif360.metrics import BinaryLabelDatasetMetric\n\n# 1. 加载示例数据集 (Adult Dataset)\n# 注意：需先按 aif360\u002Fdata\u002FREADME.md 的说明下载并放置原始数据文件\n# 默认受保护属性为 'race' 和 'sex'，编码后 1 表示特权组（如男性、白人）\ndataset = AdultDataset()\n\n# 2. 初始化指标计算器\n# privileged_groups: 特权组；unprivileged_groups: 非特权组\nprivileged_groups = [{'sex': 1}]\nunprivileged_groups = [{'sex': 0}]\nmetric = BinaryLabelDatasetMetric(dataset,\n                                  unprivileged_groups=unprivileged_groups,\n                                  privileged_groups=privileged_groups)\n\n# 3. 
计算并打印公平性指标\nprint(f\"统计奇偶差异 (Statistical Parity Difference): {metric.statistical_parity_difference():.4f}\")\nprint(f\"差异性影响 (Disparate Impact): {metric.disparate_impact():.4f}\")\nprint(\"一致性比率 (Consistency):\", metric.consistency())\n\n# 统计奇偶差异越接近 0、差异性影响越接近 1，表示群体间偏见越小；\n# 一致性比率越接近 1，表示相似个体获得的预测越一致。\n```\n\n### 下一步\n*   **去偏处理**：查看 `aif360.algorithms` 模块，使用 `Reweighing`、`DisparateImpactRemover` 等算法对数据进行预处理，或在模型训练中、预测后处理阶段进行去偏。\n*   **详细教程**：安装完成后，可参考官方 `examples` 目录下的 Jupyter Notebooks 获取更深入的分类模型训练与去偏实战案例。","某大型银行的数据科学团队正在开发一套自动化信贷审批模型，旨在提高贷款发放效率并降低人工审核成本。\n\n### 没有 AIF360 时\n- **偏见难以量化**：团队仅凭直觉或简单的统计图表判断模型是否公平，缺乏统一的数学指标（如差异影响率、机会均等差）来科学定义“歧视”。\n- **修复靠盲目试错**：发现特定群体（如少数族裔或女性）通过率偏低后，只能手动调整特征权重或剔除敏感字段，往往导致模型整体准确率大幅下滑。\n- **合规风险不可控**：无法向监管机构提供具体的公平性审计报告，一旦模型上线引发舆论争议，将面临巨大的法律风险和品牌声誉损失。\n- **算法选择无依据**：面对多种去偏方法，团队不清楚该在数据预处理阶段介入，还是在模型训练或后处理阶段修正，缺乏系统性的指导。\n\n### 使用 AIF360 后\n- **全方位公平性诊断**：利用 AIF360 内置的几十种标准指标，团队快速量化了模型在不同人口统计学群体间的表现差异，精准定位了偏见来源。\n- **科学化去偏优化**：直接调用“重加权（Reweighing）”或“均衡几率后处理”等成熟算法，在将群体间通过率差异缩小至合规范围内的同时，保持了模型预测精度。\n- **生成可解释报告**：借助工具自带的解释模块，自动生成包含详细指标数据和缓解效果的审计报告，轻松满足金融监管的合规要求。\n- **全生命周期管理**：依据官方指南，团队建立了从数据清洗到模型部署的标准化公平性检测流程，确保新迭代版本不会引入新的偏见。\n\nAIF360 将抽象的伦理原则转化为可执行的代码流程，帮助企业在追求算法效率的同时守住公平底线。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FTrusted-AI_AIF360_2957497d.png","Trusted-AI","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FTrusted-AI_53a5ad53.png","This GitHub org hosts LF AI Foundation projects in the category of Trusted and Responsible AI.",null,"info@lfai.foundation","LFAI_Foundation","https:\u002F\u002Fgithub.com\u002FTrusted-AI",[83,87,91,95],{"name":84,"color":85,"percentage":86},"Python","#3572A5",95.6,{"name":88,"color":89,"percentage":90},"R","#198CE7",3.4,{"name":92,"color":93,"percentage":94},"Java","#b07219",0.9,{"name":96,"color":97,"percentage":98},"Dockerfile","#384d54",0,2786,907,"2026-04-03T18:59:08","Apache-2.0","macOS, Ubuntu (Linux), Windows","未说明（部分算法如 Adversarial Debiasing 依赖 TensorFlow，通常可运行于 CPU，若需 GPU 加速需自行配置兼容的 
TensorFlow-GPU 环境）","未说明",{"notes":107,"python":108,"dependencies":109},"建议使用 Conda 创建虚拟环境以避免依赖冲突。核心指标功能可直接使用，但特定去偏算法（如优化预处理、对抗去偏等）需要安装额外的可选依赖（例如 tensorflow 或 cvxpy）。在 macOS 上安装 cvxpy 可能需要先安装 Xcode Command Line Tools；在 Windows 上可能需要安装 Microsoft C++ Build Tools。","3.8 – 3.11",[110,111,112,113,114,115],"tensorflow>=1.13.1 (可选，用于 Adversarial Debiasing)","cvxpy (可选，用于 Optimized Preprocessing)","scikit-learn","pandas","numpy","scipy",[53,13,51,55,14,26,15,52,54],[118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137],"ai","fairness-ai","fairness","fairness-testing","fairness-awareness-model","bias-detection","bias","bias-correction","bias-reduction","bias-finder","artificial-intelligence","discrimination","ibm-research-ai","ibm-research","machine-learning","deep-learning","codait","trusted-ai","r","python",4,"2026-03-27T02:49:30.150509","2026-04-06T11:30:51.073064",[142,147,152,157,162,167],{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},9577,"如何在 macOS 上解决安装 cvxpy 模块时的编译错误？","在 macOS 上安装 cvxpy 时遇到编译错误（如缺少 vector 头文件），可以尝试以下两种解决方案：\n1. 使用 Conda 安装特定旧版本：\n   $ conda create --name aios python=3.6\n   $ source activate aios\n   $ conda install -c cvxgrp cvxpy=0.4.9\n2. 安装最新版并修改代码兼容性：\n   首先安装最新版：$ conda install -c cvxgrp cvxpy\n   然后修改 opt_tools.py 文件，将导入语句从 'from cvxpy import ..., sum_entries, mul_elemwise, ...' 
改为 'from cvxpy import ..., norm; import cvxpy as cvx'，并将代码中的 'sum_entries' 替换为 'cvx.sum'，'mul_elemwise' 替换为 'cvx.multiply'。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F44",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},9578,"为什么运行信用风险（Credit Risk）教程笔记本后生成的数据集偏差反而更大了？","该问题是由于教程代码过时导致的，维护者已确认并修复了教程内容。如果遇到此问题，请确保拉取最新的代码库。此外，用户也可以尝试使用更稳定的 Adult 数据集版本来进行实验，以避免此类不稳定性。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F51",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},9579,"AIF360 库是否与最新版本的 NumPy 兼容？","早期版本（如 0.5.0）可能与最新版的 NumPy 存在兼容性问题。维护者已在主分支中修复了该问题并发布新版本。如果您遇到兼容性问题，请升级 AIF360 到最新版本以支持与最新 NumPy 的无缝集成。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F500",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},9580,"重加权（Reweighing）算法是否只学习 4 个样本权重？这是预期行为吗？","默认的重加权实现确实只为二元分组（一个特权组和一个非特权组）学习 4 个权重。这是预期行为，因为该算法假设只有两类受保护属性值。如果您需要为每个组和类别的组合学习独立的权重（即支持多个子组），请使用 'aif360.sklearn.preprocessing' 模块中的重加权版本，该版本更符合论文描述的通用行为。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F429",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},9581,"在 R 语言中调用 load_aif360_lib() 时出现 'cannot change value of locked binding' 错误或导致会话崩溃怎么办？","该问题已通过代码修复和 README 文档中故障排除部分的更新得到解决。请确保您使用的是最新版本的 AIF360 R 包，并查阅 README 中的最新安装指南。如果仍在使用 reticulate 加载环境时崩溃，请检查是否正确配置了 conda 环境路径，并参考官方文档的最新建议。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F298",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},9582,"加载标准数据集（如 AdultDataset）时抛出 NotImplementedError 错误是什么原因？","此错误通常发生在旧版代码试图返回 ndarray 而新版接口已变更时。在较新的 AIF360 版本中，相关警告已升级为错误。解决方法是更新您的代码以适配新的 API，或者直接升级到最新的 AIF360 版本，其中已包含对标准数据集加载逻辑的修复。","https:\u002F\u002Fgithub.com\u002FTrusted-AI\u002FAIF360\u002Fissues\u002F162",[173,178,183,188,193,198,203,208,213,218,223,228],{"id":174,"version":175,"summary_zh":176,"released_at":177},115963,"v0.6.1","# AIF360 v0.6.1 Release 
# AIF360 v0.6.1 Release Notes

## Highlights
* New detector: `FACTS`/`FACTS_bias_scan`

## What's Changed
* Fairness Aware Counterfactuals for Subgroups by @phantom-duck in https://github.com/Trusted-AI/AIF360/pull/457
* Avoid uses of `np.float`, which has been removed from the latest NumPy, by @hirzel in https://github.com/Trusted-AI/AIF360/pull/521
* Fix Example Google Style Python Docstrings URL by @ShorthillsAI in https://github.com/Trusted-AI/AIF360/pull/522
* Add FACTS import warnings by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/524
* Bump version -> 0.6.1 by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/525

## New Contributors
* @phantom-duck made their first contribution in https://github.com/Trusted-AI/AIF360/pull/457
* @hirzel made their first contribution in https://github.com/Trusted-AI/AIF360/pull/521
* @ShorthillsAI made their first contribution in https://github.com/Trusted-AI/AIF360/pull/522

**Full Changelog**: https://github.com/Trusted-AI/AIF360/compare/v0.6.0...v0.6.1

*Released: 2024-04-08*

# AIF360 v0.6.0 Release Notes

## Highlights
* New algorithms:
    * `SenSeI`/`SenSR`
    * `DeterministicReranking`
* New metric:
    * `ot_distance`

## Backwards-Incompatible Changes
* Dropped support for `bias_scan` from `aif360.metrics`/`aif360.sklearn.metrics`
* Minor changes to MEPS files

## What's Changed
* Add python 3.10 for testing in ci.yml by @hakimamarullah in https://github.com/Trusted-AI/AIF360/pull/368
* Allow binder to automatically build environment by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/395
* Update demo_lime notebook to run out of the box (and in Google Colab) by @anupamamurthi in https://github.com/Trusted-AI/AIF360/pull/396
* Fix error in metric_json_explainer.consistency() by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/400
* Solve issue #381: run the demo_new_features notebook in Colab by @ivesulca in https://github.com/Trusted-AI/AIF360/pull/402
* Fix logic error in `disparate_impact` and `statistical_parity_difference` (#292) by @sreeja-g in https://github.com/Trusted-AI/AIF360/pull/407
* Modify example notebooks to work in Google Colab (#378) by @Yashaswini-Viswanath in https://github.com/Trusted-AI/AIF360/pull/405
* Solve second part of issue #381 by @ivesulca in https://github.com/Trusted-AI/AIF360/pull/409
* Get notebooks in examples/sklearn/ to work in Google Colab (Part 2/3) (#380) by @dharmod in https://github.com/Trusted-AI/AIF360/pull/403
* Average predictive value difference metric implementation (#376) by @sreeja-g in https://github.com/Trusted-AI/AIF360/pull/410
* Add code coverage checks with `pytest-cov` by @aitorres in https://github.com/Trusted-AI/AIF360/pull/412
* Initial inFairness algorithms (SenSeI/SenSR) by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/340
* Remove turtle module import by @gkumbhat in https://github.com/Trusted-AI/AIF360/pull/415
* Contribution Guide by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/427
* Bump versions by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/458
* Fix typo in GridSearchReduction by @haas-christian in https://github.com/Trusted-AI/AIF360/pull/452
* Add a bias detector based on optimal transport by @jmarecek in https://github.com/Trusted-AI/AIF360/pull/434
* Privileged class in Bank dataset (issue #265) by @joosjegoedhart in https://github.com/Trusted-AI/AIF360/pull/449
* Fix typo by @gowriaddepalli in https://github.com/Trusted-AI/AIF360/pull/470
* Added methods for equalized odds difference by @divyagaddipati in https://github.com/Trusted-AI/AIF360/pull/477
* Update data_preproc_functions.py to solve conversion-to-float issue by @baraldian in https://github.com/Trusted-AI/AIF360/pull/498
* Add methods for dealing with fairness in rankings by @andrewklayk in https://github.com/Trusted-AI/AIF360/pull/461
* Update homepage URL by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/503
* Remove deprecated bias scan metrics by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/504
* Replace deprecated `if_delegate_has_method` with `available_if` by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/511
* Fix tests failing due to int columns by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/513
* Change source for Law School GPA dataset by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/510
* Install R dependencies by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/514
* Update sphinx requirement by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/512
* Bump jinja2 from 3.0.3 to 3.1.3 by @dependabot in https://github.com/Trusted-AI/AIF360/pull/507
* Add .readthedocs.yaml by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/516
* Fix .readthedocs.yaml and bump version by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/517
* Fix sphinx_rtd_theme by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/518
* Rename master -> main by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/515
* Remove `requests` dependency by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/519
* Include `fairadapt.R` in package by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/520

## New Contributors
* @hakimamarullah made their first contribution in https://github.com/Trusted-AI/AIF360/pull/368
* @anupamamurthi made their first contribution in https://github.com/Trusted-AI/AIF360/pull/396
* @ivesulca made their first contribution in https://github.com/Trusted-AI/AIF360/pull/402
* @sreeja-g made their first contribution in https://github.com/Trusted-AI/AIF360/pull/407
* @Yashaswini-Viswanath made their first contribution in https://github.com/Trusted-AI/AIF360/pull/405
* @dharmod made their first contribution in https://github.com/Trusted-AI/AIF360/pull/403
* @aitorres made their first contribution in https://github.com/Trusted-AI/AIF360/pull/412
* @gkumbhat made their first contribution in https://github.com/Trusted-AI/AIF360/pull/415
* @haas-christian made their first contribution in https://github.com/Trusted-AI/AIF360/pull/452

*Released: 2024-02-23*

# AIF360 v0.5.0 Release Notes

## Highlights
* New algorithms:
  * FairAdapt
* New metrics:
  * MDSS
  * `class_imbalance`, `kl_divergence`, `conditional_demographic_disparity`
  * `intersection` and `one_vs_rest` meta-metrics
* sklearn-compatible ports:
  * differential fairness metrics
  * MEPS, COMPAS violent
  * RejectOptionClassification, LearnedFairRepresentations

## New Features/Improvements
* Multidimensional subset scanning (MDSS) for bias in classifiers by @Viktour19 in https://github.com/Trusted-AI/AIF360/pull/238
* Update component.yaml to kfp v2 sdk by @yhwang in https://github.com/Trusted-AI/AIF360/pull/259
* Fairadapt inclusion in AIF360 by @dplecko in https://github.com/Trusted-AI/AIF360/pull/257
* Added a tutorial for advertising data by @barvek in https://github.com/Trusted-AI/AIF360/pull/310
* More sklearn-compatible algorithms by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/318
* Dataset Improvements by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/278
  * an array of sample-wise protected attributes may now be passed in `prot_attr` instead of an index label
* Method of the month (July) by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/324
* sklearn-compat additions by @mnagired in https://github.com/Trusted-AI/AIF360/pull/322
  * add `predict_proba` to `RejectOptionClassifier`
* More sklearn-compatible metrics by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/290
  * `smoothed_edf`, `df_bias_amplification`
  * `class_imbalance`, `kl_divergence`, `conditional_demographic_disparity`
  * `intersection`, `one_vs_rest`

## Backwards-Incompatible Changes
* Add detectors api by @Adebayo-Oshingbesan in https://github.com/Trusted-AI/AIF360/pull/305
  * the version of `bias_scan` in `aif360.metrics` will be deprecated next release

## Fixes
* Fixed computation of coefficient of variation in classification_metrics by @plankington in https://github.com/Trusted-AI/AIF360/pull/288
* Fix exponentiated gradient reduction without protected attribute (#267) by @jdnklau in https://github.com/Trusted-AI/AIF360/pull/268
* Remove caches due to excessive memory use by @Adebayo-Oshingbesan in https://github.com/Trusted-AI/AIF360/pull/317
* Fix rpy2 crash bug by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/313
* Fix pipelining bug in fairlearn algorithms by @hoffmansc in https://github.com/Trusted-AI/AIF360/pull/323
* Optional tempeh, conditional imports by @DanielRyszkaIBM in https://github.com/Trusted-AI/AIF360/pull/338
* Restrict AdversarialDebiasing's trainable variables to the current scope by @mfeffer in https://github.com/Trusted-AI/AIF360/pull/255
* Increase max_iter to 1000 for the LogisticRegression used in PrejudiceRemover by @mfeffer in https://github.com/Trusted-AI/AIF360/pull/254

## New Contributors
* @Viktour19 made their first contribution in https://github.com/Trusted-AI/AIF360/pull/238
* @jdnklau made their first contribution in https://github.com/Trusted-AI/AIF360/pull/268
* @yhwang made their first contribution in https://github.com/Trusted-AI/AIF360/pull/259
* @dplecko made their first contribution in https://github.com/Trusted-AI/AIF360/pull/257
* @plankington made their first contribution in https://github.com/Trusted-AI/AIF360/pull/288
* @Adebayo-Oshingbesan made their first contribution in https://github.com/Trusted-AI/AIF360/pull/305
* @barvek made their first contribution in https://github.com/Trusted-AI/AIF360/pull/310
* @milevavantuyl made their first contribution in https://github.com/Trusted-AI/AIF360/pull/309
* @josue-rodriguez made their first contribution in https://github.com/Trusted-AI/AIF360/pull/315
* @DanielRyszkaIBM made their first contribution in https://github.com/Trusted-AI/AIF360/pull/338
* @mnagired made their first contribution in https://github.com/Trusted-AI/AIF360/pull/322
* @mfeffer made their first contribution in https://github.com/Trusted-AI/AIF360/pull/255

**Full Changelog**: https://github.com/Trusted-AI/AIF360/compare/v0.4.0...v0.5.0

*Released: 2022-09-03*

# AIF360 v0.4.0 Release Notes

This is a major release containing a number of new features, improvements, and bugfixes.

## Highlights
* TensorFlow 2 and Python 3.8 now supported
* New algorithms:
  * Exponentiated Gradient Reduction
  * Grid Search Reduction
* New dataset:
  * Law School GPA

## New Features/Improvements
* Python 3.8 and TensorFlow 2 (via `compat.v1`) support added (#230)
* Algorithms from fairlearn added (#215):
  * Exponentiated Gradient Reduction and Grid Search Reduction
  * Support for regression datasets
  * Law School GPA dataset added
* `MetaFairClassifier` code cleaned up and sped up (#196)
* Removed numba dependency (#187)
* `maxiter` and `maxfun` arguments in LFR `fit()` (#184)

## Backwards-Incompatible Changes
* Removed support for Python 3.5

## Fixes
* Fixed bug where `scores` in a single-row dataset was getting squeezed (#193)
* Fixed typo in `consistency_score` documentation (#195)
* Fixed Lime notebook license issue (#191)

## New Contributors
@baba-mpe, @SSaishruthi, @leenamurgai, @synapticarbors, @sohiniu, @yangky11

*Released: 2021-03-04*

# AIF360 v0.3.0 Release Notes

This is a major release containing a number of new features, improvements, and bugfixes.

## Highlights
* scikit-learn-compatible API for certain algorithms, metrics, and datasets
* Documentation layout revamped to make it easier to navigate
* New algorithm:
  * Fairness Gerrymandering ([Kearns et al., 2018](https://arxiv.org/abs/1711.05144))
* New metrics:
  * Differential Fairness ([Foulds et al., 2018](https://arxiv.org/pdf/1807.08362))
  * Rich Subgroup Fairness ([Kearns et al., 2018](https://arxiv.org/abs/1711.05144))

## New Features/Improvements
* Optional dependencies may now be installed using the setuptools "extras" option: e.g., `pip install 'aif360[LFR,AdversarialDebiasing]'` or `pip install 'aif360[all]'`
* Added support for integrations with MLOps (Kubeflow and NiFi) and examples
* Added `scores` output to `AdversarialDebiasing.predict()` (#139)
* Added a `subset()` method to `StructuredDataset` (#140)
* Added new `MulticlassLabelDataset` to support basic multiclass problems (#165)
* **scikit-learn compatibility** (#134)
  * EXPERIMENTAL: incomplete, contributions welcome
  * 4 datasets (Adult, German, Bank, COMPAS) in DataFrame format with protected attributes in the index
    * automatically downloaded from openml.org
  * 6 group fairness metrics as functions (`statistical_parity_difference`, `disparate_impact_ratio`, `equal_opportunity_difference`, `average_odds_difference`, `average_odds_error`, `between_group_generalized_entropy_error`)
  * 2 individual fairness metrics as functions (`generalized_entropy_index` and its variants, `consistency_score`)
  * 5 additional metrics as functions (`specificity_score`, `base_rate`, `selection_rate`, `generalized_fpr`, `generalized_fnr`)
  * `make_scorer` function to wrap metrics for use in sklearn cross-validation functions (#174, #178)
  * 3 algorithms (`Reweighing`, `AdversarialDebiasing`, `CalibratedEqualizedOdds`)

## Fixes
* Fixed deprecation warning/`NotImplementedError` in `StandardDataset` (#115)
* Fixed age threshold in `GermanDataset` (#129 and #137)
* Corrected privileged/unprivileged attribute values for the COMPAS dataset in some demos (#138)
* Fixed base rate computation in EqOddsPostprocessing (#170)
* Improved warning messages when optional packages are missing (#170)
* Multiple documentation fixes (#114, #124, #153, #155, #157, #158, #159, #170)

## New Contributors
@autoih, @romeokienzler, @jimbudarz, @stephanNorsten, @sethneel, @imolloy, @guillemarsan, @gdequeiroz, @chajath, @bhavyaghai, @Tomcli, @swapna-somineni, @chkoar, @motapaolla

*Released: 2020-06-02*

# AIF360 v0.3.0rc0 Release Notes

This is a major release containing a number of new features, improvements, and bugfixes.

## Highlights
* scikit-learn-compatible API for certain algorithms, metrics, and datasets
* Documentation layout revamped to make it easier to navigate
* New algorithm:
  * Fairness Gerrymandering ([Kearns et al.,
2018](https://arxiv.org/abs/1711.05144))
* New metrics:
  * Differential Fairness ([Foulds et al., 2018](https://arxiv.org/pdf/1807.08362))
  * Rich Subgroup Fairness ([Kearns et al., 2018](https://arxiv.org/abs/1711.05144))

## New Features/Improvements
* Optional dependencies may now be installed using the setuptools "extras" option: e.g., `pip install 'aif360[LFR,AdversarialDebiasing]'` or `pip install 'aif360[all]'`
* Added support for integrations with MLOps (Kubeflow and NiFi) and examples
* Added `scores` output to `AdversarialDebiasing.predict()` (#139)
* Added a `subset()` method to `StructuredDataset` (#140)
* **scikit-learn compatibility** (#134)
  * EXPERIMENTAL: incomplete, contributions welcome
  * 4 datasets (Adult, German, Bank, COMPAS) in DataFrame format with protected attributes in the index
    * automatically downloaded from openml.org
  * 6 group fairness metrics as functions (`statistical_parity_difference`, `disparate_impact_ratio`, `equal_opportunity_difference`, `average_odds_difference`, `average_odds_error`, `between_group_generalized_entropy_error`)
  * 2 individual fairness metrics as functions (`generalized_entropy_index` and its variants, `consistency_score`)
  * 5 additional metrics as functions (`specificity_score`, `base_rate`, `selection_rate`, `generalized_fpr`, `generalized_fnr`)
  * 3 algorithms (`Reweighing`, `AdversarialDebiasing`, `CalibratedEqualizedOdds`)

## Fixes
* Fixed deprecation warning/`NotImplementedError` in `StandardDataset` (#115)
* Fixed age threshold in `GermanDataset` (#129 and #137)
* Corrected privileged/unprivileged attribute values for the COMPAS dataset in some demos (#138)
* Multiple documentation fixes (#114, #124, #153, #155, #157, #158, #159)

## New Contributors
@autoih, @romeokienzler, @jimbudarz, @stephanNorsten, @sethneel, @imolloy, @guillemarsan, @gdequeiroz, @chajath, @bhavyaghai, @Tomcli

*Released: 2020-04-03*

# AIF360 v0.2.3 Release Notes

## Fixes
* Fixed `fit_predict` arguments in `RejectOptionClassification` (#111)
* Removed Orange3 from requirements (#113)

*Released: 2020-03-09*

# AIF360 v0.2.2 Release Notes

## Fixes
* Removed the Gender Classification tutorial (see #101 for details and discussion)
* Fixed a bug in Optimized Preprocessing to check for optimality correctly

*Released: 2019-09-16*

# AIF360 v0.2.1 Release Notes

## Backwards-Incompatible Changes
* Deprecated support for Python 2.7

## Fixes
* See issues #80, #83
* Also PRs #86, #90

*Released: 2019-08-13*

# AIF360 v0.2.0 Release Notes

## Highlights
New algorithm:
* Meta-Algorithm for Fair Classification ([Celis et al., 2018](https://arxiv.org/abs/1806.06055))

## New Features/Improvements
* Added download script for MEPS data
* Added ability to choose the protected attribute for `DisparateImpactRemover`
* Updated `OptimPreproc` to use the latest version of `cvxpy`
* Added a threshold value to update `labels` from predicted `scores` in `CalibratedEqOddsPostprocessing`
* New `scores_names` arg in `StructuredDataset` allows easier importing of predictions run elsewhere
* `tutorial_gender_classification` notebook now uses `skimage` instead of `cv2`
* `aif360.__version__` now returns the correct version string

## Fixes
* Changed the Credit Scoring tutorial to use `Reweighing`; added a new demo using `AdversarialDebiasing` on the Adult dataset
* Removed dependency on `subprocess.run` in `PrejudiceRemover` for Python 2.7 compatibility
* Fixed a bug where `categorical_features` did not take `features_to_drop` into account in `StandardDataset`

## New Contributors
@ckadner, @cclauss, @vijaykeswani, @ffosilva, @kant, @adrinjalali, @mariaborbones

*Released: 2019-01-23*

# AIF360 0.1.1 Release Notes

This update contains no feature changes.

## Fixes
* Changes to the description for PyPI

*Released: 2018-09-18*

# AIF360 0.1.0 Release Notes

The AI Fairness 360 toolkit is an open-source library to help detect and remove bias in machine learning models.
The AI Fairness 360 Python package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

## Highlights
A brief list of features provided in this release:
* Algorithms:
    * Optimized Preprocessing ([Calmon et al., 2017](http://papers.nips.cc/paper/6988-optimized-pre-processing-for-discrimination-prevention))
    * Disparate Impact Remover ([Feldman et al., 2015](https://doi.org/10.1145/2783258.2783311))
    * Equalized Odds Postprocessing ([Hardt et al., 2016](https://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning))
    * Reweighing ([Kamiran and Calders, 2012](http://doi.org/10.1007/s10115-011-0463-8))
    * Reject Option Classification ([Kamiran et al., 2012](https://doi.org/10.1109/ICDM.2012.45))
    * Prejudice Remover Regularizer ([Kamishima et al., 2012](https://rd.springer.com/chapter/10.1007/978-3-642-33486-3_3))
    * Calibrated Equalized Odds Postprocessing ([Pleiss et al., 2017](https://papers.nips.cc/paper/7151-on-fairness-and-calibration))
    * Learning Fair Representations ([Zemel et al., 2013](http://proceedings.mlr.press/v28/zemel13.html))
    * Adversarial Debiasing ([Zhang et al., 2018](http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_162.pdf))
* Datasets interface (raw data not included)
    * UCI ML Repository: Adult, German Credit, Bank Marketing
    * ProPublica Recidivism
    * Medical Expenditure Panel Survey
* Metrics
    * Comprehensive set of group fairness metrics derived from selection rates and error rates
    * Comprehensive set of sample distortion metrics
    * Generalized Entropy Index ([Speicher et al., 2018](https://doi.org/10.1145/3219819.3220046))
* Metric Explanations
    * Text and JSON output formats supported

*Released: 2018-09-18*
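The group and individual fairness metrics that recur throughout these releases reduce to short formulas. As a reading aid only, here is a minimal pure-Python sketch of three of them. It assumes binary 0/1 labels, a 0/1 protected attribute with 1 marking the privileged group, and aif360's unprivileged-minus-privileged sign convention; the helper names are illustrative, not the aif360 API.

```python
# Illustrative re-implementations of a few metrics named in these release
# notes (NOT the aif360 implementations). Assumes binary 0/1 outcomes and a
# binary protected attribute where 1 marks the privileged group.

def selection_rate(y, prot, value):
    """Fraction of positive outcomes within one protected-attribute group."""
    group = [yi for yi, p in zip(y, prot) if p == value]
    return sum(group) / len(group)

def statistical_parity_difference(y, prot):
    """Pr(y=1 | unprivileged) - Pr(y=1 | privileged); 0 means parity."""
    return selection_rate(y, prot, 0) - selection_rate(y, prot, 1)

def disparate_impact_ratio(y, prot):
    """Pr(y=1 | unprivileged) / Pr(y=1 | privileged); 1 means parity."""
    return selection_rate(y, prot, 0) / selection_rate(y, prot, 1)

def generalized_entropy_index(benefits, alpha=2):
    """Generalized Entropy Index over per-sample benefits b_i (Speicher et
    al., 2018), for alpha not in {0, 1}; 0 means all benefits are equal."""
    n = len(benefits)
    mu = sum(benefits) / n
    return sum((b / mu) ** alpha - 1 for b in benefits) / (n * alpha * (alpha - 1))

y    = [1, 0, 1, 1, 0, 0, 1, 0]
prot = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y, prot))   # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(y, prot))          # 0.75 / 0.25 = 3.0
print(generalized_entropy_index([1.0, 3.0]))    # 0.125
```

The aif360 versions additionally handle multi-valued protected attributes, sample weights, and pandas index conventions, so treat this sketch only as a gloss on the metric names above.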