[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-EthicalML--awesome-artificial-intelligence-regulation":3,"similar-EthicalML--awesome-artificial-intelligence-regulation":59},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":18,"owner_email":19,"owner_twitter":14,"owner_website":20,"owner_url":21,"languages":18,"stars":22,"forks":23,"last_commit_at":24,"license":25,"difficulty_score":26,"env_os":27,"env_gpu":28,"env_ram":28,"env_deps":29,"category_tags":32,"github_topics":37,"view_count":53,"oss_zip_url":18,"oss_zip_packed_at":18,"status":54,"created_at":55,"updated_at":56,"faqs":57,"releases":58},9111,"EthicalML\u002Fawesome-artificial-intelligence-regulation","awesome-artificial-intelligence-regulation","This repository aims to map the ecosystem of artificial intelligence guidelines, principles, codes of ethics, standards, regulation and beyond.","awesome-artificial-intelligence-regulation 是一个致力于梳理全球人工智能治理生态的开源知识库。随着 AI 技术深入社会，从业者常面临复杂的伦理挑战和法律合规难题，而网络上相关的原则、框架和法规分散且难以检索。该项目正是为了解决这一痛点，将全球各地的 AI 
指南、道德准则、行业标准及法律法规进行了系统化的映射与整理。\n\n其内容覆盖广泛，不仅按经济区域（如中国、欧盟、美国、巴西等）分类汇总了国家级监管政策，还提供了高层级框架、行业标准化倡议、实用检查清单、互动工具以及在线学习资源。无论是需要确保产品合规的开发者、关注伦理边界的研究人员，还是制定企业策略的管理者，都能在此快速找到所需的参考依据。\n\n该项目的独特亮点在于其“地图式”的结构设计，将原本晦涩难懂的法律条文和抽象的伦理原则转化为清晰可查的资源索引，并支持多语言访问。它不生产新的规则，而是充当了连接技术与规范的桥梁，帮助用户在面对未知的伦理困境时，能够有据可依，负责任地开发和部署人工智能系统。","[![Awesome](images\u002Fawesome.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![Maintenance](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002FEthicalML\u002Fawesome-artificial-intelligence-guidelines\u002Fgraphs\u002Fcommit-activity)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRelease-PROD-yellow.svg)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLanguages-MULTI-blue.svg)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-lightgrey.svg)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Faxsaucedo.svg?label=Follow)](https:\u002F\u002Ftwitter.com\u002FAxSaucedo\u002F)\n\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd width=\"60%\">\n\u003Ch1>Awesome AI Regulation, Principles & Guidelines\u003C\u002Fh1>\n\u003C\u002Ftd>\n\u003Ctd>\n\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FdKjCWfuvYxQ?t=147\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEthicalML_awesome-artificial-intelligence-regulation_readme_e809e9f2ed68.gif\">\u003C\u002Fa> \u003Cbr> (AKA Writing AI Responsibly)\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## Overview\n\nAs AI systems become more prevalent in society, we face bigger and tougher societal challenges. Given that many of these challenges have not been faced before, practitioners will encounter scenarios that require dealing with hard ethical and societal questions.\n\nThere has been a large amount of content published which attempts to address these issues through \"Principles\", \"Ethics Frameworks\", \"Checklists\" and beyond. 
However, navigating the large number of resources is not easy.\n\nThis repository aims to simplify this by mapping the ecosystem of guidelines, principles, codes of ethics, standards and regulation being put in place around artificial intelligence.\n\n## Quick links to sections in this page\n\n### National Regulation by Economic Area\n\n| | | | |\n|-|-|-|-|\n|[Austria 🇦🇹](#austria)|[Brazil 🇧🇷](#brazil)|[Canada 🇨🇦](#canada)|[China 🇨🇳](#china)|\n|[Dubai 🇦🇪](#dubai)|[European Union 🇪🇺](#european-union)|[India 🇮🇳](#india)|[Ireland 🇮🇪](#ireland)|\n|[Israel 🇮🇱](#israel)|[Mexico 🇲🇽](#mexico)|[Singapore 🇸🇬](#singapore)|[Switzerland 🇨🇭](#switzerland)|\n|[United Arab Emirates 🇦🇪](#united-arab-emirates)|[United Kingdom 🇬🇧](#united-kingdom)|[United States of America 🇺🇸](#united-states-of-america)|\n\n### Other Sections\n\n| | | |\n|-|-|-|\n|[🔍 High Level Frameworks & Principles](#high-level-frameworks-and-principles) |[📜 Industry standards initiatives](#industry-standards-initiatives)| [🔨 Interactive & Practical Tools](#interactive-and-practical-tools)|\n|[📚 Online Courses](#online-courses-and-learning-resources)|[🔏 Processes & Checklists](#processes-and-checklists)|[🤖 Research and Industry Newsletters](#research-and-industry-newsletters)|\n\n## Other relevant resources\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd width=\"30%\">\n         You can join the \u003Ca href=\"https:\u002F\u002Fethical.institute\u002Fmle.html\">Machine Learning Engineer\u003C\u002Fa> newsletter. 
You will receive updates on open source frameworks, tutorials and articles curated by machine learning professionals.\n    \u003C\u002Ftd>\n    \u003Ctd width=\"70%\">\n        \u003Ca href=\"https:\u002F\u002Fethical.institute\u002Fmle.html\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEthicalML_awesome-artificial-intelligence-regulation_readme_bf9a0a64abe8.png\">\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n# Regulation and Policy\n\n## Austria\n\n* [Artificial Intelligence Mission Austria 2030](https:\u002F\u002Fwww.bmk.gv.at\u002Fthemen\u002Finnovation\u002Fpublikationen\u002Fikt\u002Fai\u002Faimat.html) - Shaping the Future of Artificial Intelligence in Austria. The Austrian ministry for Innovation and Technology published their vision for AI until 2030.\n\n## Brazil\n\n* [Brazilian AI Regulation (PL 2338\u002F2023)](https:\u002F\u002Fwww25.senado.leg.br\u002Fweb\u002Fatividade\u002Fmaterias\u002F-\u002Fmateria\u002F157233) - A proposed bill in Brazil aiming to establish a comprehensive framework for the development and use of artificial intelligence, emphasizing transparency, accountability, and alignment with international standards.\n\n## Canada\n\n* [Artificial Intelligence and Data Act (AIDA)](https:\u002F\u002Fised-isde.canada.ca\u002Fsite\u002Finnovation-better-canada\u002Fen\u002Fartificial-intelligence-and-data-act-aida-companion-document) - An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts\n\n## China\n\n* [Beijing AI Principles](https:\u002F\u002Fwww.baai.ac.cn\u002Fblog\u002Fbeijing-ai-principles) - initiative for the research, development, use, governance and long-term planning of AI, calling for its healthy development to support the construction of a human community with a shared future, and the realization of beneficial 
AI for humankind and nature.\n* [China Internet Security Law](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FChina_Internet_Security_Law) - China's law enacted to increase cybersecurity and national security, safeguard cyberspace sovereignty and public interest, protect the legitimate rights and interests of citizens, legal persons and other organisations, and promote healthy economic and social development (the Chinese Ministry of Industry and Information Technology argued that this law justified the \"Going Out\" strategy China has pursued since 1999). [KPMG's summary of the Cybersecurity Law](https:\u002F\u002Fassets.kpmg\u002Fcontent\u002Fdam\u002Fkpmg\u002Fcn\u002Fpdf\u002Fen\u002F2017\u002F02\u002Foverview-of-cybersecurity-law.pdf). Center for Strategic & International Studies [overview of China's new Data Privacy law](https:\u002F\u002Fwww.csis.org\u002Fanalysis\u002Fnew-china-data-privacy-standard-looks-more-far-reaching-gdpr)\n* [China's Administrative Provisions on Information Services on Microblogs](http:\u002F\u002Fen.pkulaw.cn\u002Fdisplay.aspx?cgid=309714&lib=law) - China's provisions which require microblogging sites (social media sites) to obtain relevant credentials by law, verify users' real identities, establish mechanisms for dispelling and refuting rumors, etc. 
[Summary of rules](https:\u002F\u002Fwww.loc.gov\u002Flaw\u002Fforeign-news\u002Farticle\u002Fchina-rules-regulating-microblogs-issued\u002F) by the US Law Library of Congress.\n* [China's Interim Measures for the Management of Generative Artificial Intelligence Services](https:\u002F\u002Fwww.cac.gov.cn\u002F2023-07\u002F13\u002Fc_1690898327029107.htm) - The first administrative regulation on the management of Generative AI services, which came into effect on August 15, 2023.\n* [China's Personal Information Security Specification (Translation)](https:\u002F\u002Fwww.newamerica.org\u002Fcybersecurity-initiative\u002Fdigichina\u002Fblog\u002Ftranslation-chinas-personal-information-security-specification\u002F) - The Chinese Government's first major digital privacy rules, which took effect in May 2018 and lay out granular guidelines for consent and for how personal data should be collected, used and shared. Center for Strategic & International Studies [overview of the specification](https:\u002F\u002Fwww.csis.org\u002Fanalysis\u002Fchinas-emerging-data-privacy-system-and-gdpr)\n* [Decision on strengthening the protection of online information](https:\u002F\u002Fwww.globalprivacyblog.com\u002Fprivacy\u002Fchinas-legislature-adopts-decision-on-strengthening-the-protection-of-online-information\u002F) - The Standing Committee of the National People's Congress (NPC) of the People's Republic of China adopted the decision on strengthening the protection of online information - an act that contains 12 clauses applicable to entities in both the public and private sectors with respect to the collection and processing of electronic personal information on the internet.\n* [Personal Data Protection Act](https:\u002F\u002Flaw.moj.gov.tw\u002FENG\u002FLawClass\u002FLawAll.aspx?pcode=I0050021) - The personal data protection act of the Republic of China (Taiwan), enacted to regulate the collection, processing and use of personal data so as to prevent harm to personality 
rights, and to facilitate the proper use of personal data.\n\n## Dubai\n\n* [Smart Dubai Artificial Intelligence Principles and Ethics - Ethical AI Toolkit](https:\u002F\u002Fwww.smartdubai.ae\u002Finitiatives\u002Fai-principles-ethics) - created to provide practical help across a city ecosystem. It supports industry, academia and individuals in understanding how AI systems can be used responsibly. It consists of principles and guidelines, and a self-assessment tool for developers to assess their platforms.\n\n## European Union\n\n* [Ethics Guidelines for Trustworthy AI](https:\u002F\u002Fec.europa.eu\u002Ffuturium\u002Fen\u002Fai-alliance-consultation) - European Commission document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG).\n* [EU AI Act](https:\u002F\u002Fartificialintelligenceact.eu\u002F) - The EU Artificial Intelligence (AI) Act is a legal framework that regulates AI in the European Union; this was the first comprehensive regulation on AI ([overview](https:\u002F\u002Fwww.europarl.europa.eu\u002Ftopics\u002Fen\u002Farticle\u002F20230601STO93804\u002Feu-ai-act-first-regulation-on-artificial-intelligence)).\n* [GDPR.EU Guide](https:\u002F\u002Fgdpr.eu\u002F) - A project co-funded by the Horizon 2020 Framework programme of the EU which provides a resource for organisations and individuals researching GDPR, including a library of straightforward and up-to-date information to help organisations achieve GDPR compliance ([Legal Text](https:\u002F\u002Feur-lex.europa.eu\u002Flegal-content\u002FEN\u002FTXT\u002F?uri=celex%3A32016R0679)).\n* [General Data Protection Regulation GDPR](https:\u002F\u002Feur-lex.europa.eu\u002Flegal-content\u002FEN\u002FTXT\u002F?uri=celex%3A32016R0679) - Legal text for the EU GDPR regulation 2016\u002F679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the 
free movement of such data, and repealing Directive 95\u002F46\u002FEC\n\n## India\n\n* [IT Amendment Rules, 2026](https:\u002F\u002Fwww.meity.gov.in\u002Fstatic\u002Fuploads\u002F2026\u002F02\u002F550681ab908f8afb135b0ad42816a1c9.pdf) - These are updated rules that amend the IT Rules, 2021 to further strengthen the due diligence framework for intermediaries, particularly in relation to synthetically generated information (SGI) and associated online harms. Frequently asked questions [here](https:\u002F\u002Fwww.meity.gov.in\u002Fstatic\u002Fuploads\u002F2025\u002F10\u002F065b6deb585441b5ccdf8be42502a49c.pdf).\n* [The Digital Personal Data Protection (DPDP) Act](https:\u002F\u002Fwww.meity.gov.in\u002Fwritereaddata\u002Ffiles\u002FDigital%20Personal%20Data%20Protection%20Act%202023.pdf) - India's DPDP Act aims to provide for the processing of digital personal data in a manner that recognises both the right of individuals to protect their personal data and the need to process such personal data for lawful purposes and for matters connected therewith or incidental thereto. The DPDP Act was passed in the Parliament of India in August 2023 and its implementation is being driven by [MeitY](https:\u002F\u002Fwww.meity.gov.in\u002F), which released draft rules based on the act for public consultation on 3rd January 2025. The draft DPDP rules are available for public review and feedback [here](https:\u002F\u002Finnovateindia.mygov.in\u002Fdpdp-rules-2025\u002F).\n* [The National Strategy for Artificial Intelligence](https:\u002F\u002Fwww.niti.gov.in\u002Fsites\u002Fdefault\u002Ffiles\u002F2023-03\u002FNational-Strategy-for-Artificial-Intelligence.pdf) - The approach in this paper focuses on how India can leverage transformative technologies to ensure social and inclusive growth in line with the development philosophy of the government. 
In addition, India should strive to replicate these solutions in other similarly placed developing countries.\n* [The Principles for Responsible AI](https:\u002F\u002Fwww.niti.gov.in\u002Fsites\u002Fdefault\u002Ffiles\u002F2021-02\u002FResponsible-AI-22022021.pdf) - The paper incorporates insights, feedback and experiences consolidated through inter-ministerial consultations, large-scale global multi-stakeholder consultations and a series of 1-1 consultations with AI ethics experts in India and globally, as well as wider public consultations, conducted over the last 15 months. This paper is meant to serve as an essential roadmap for the AI ecosystem, encouraging adoption of AI in a responsible manner in India and building public trust in the use of this technology, placing the idea of 'AI for All' at its very core.\n\n## Ireland\n\n* [National AI Strategy of Ireland](https:\u002F\u002Fassets.gov.ie\u002Fstatic\u002Fdocuments\u002F5e511b3a\u002FNational_Digital_and_AI_Strategy_2030.pdf) - Ireland released its national AI strategy in 2026, addressing literacy, infrastructure, adoption and more.\n\n## Israel\n\n* [The principles of the policy for the responsible development of the field of AI](https:\u002F\u002Fwww.gov.il\u002Fen\u002Fpages\u002Fmost-news20221117) - The draft policy for regulation and ethics in the field of artificial intelligence, with an emphasis on \"responsible innovation\", is intended to ensure the advancement of the industry while safeguarding the public interest.\n\n## Mexico\n\n* [Artificial Intelligence National Agenda Proposal for Mexico](https:\u002F\u002Fwww.ania.org.mx\u002F_files\u002Fugd\u002F447d95_c7e6ebee6cf44b38a0d386cc9534f6e5.pdf) - Recommendations for multi-faceted commitment to guide the development and use of artificial intelligence in Mexico in an ethical and responsible manner.\n\n## Singapore\n\n* [Personal Data Protection Act 2012](https:\u002F\u002Fsso.agc.gov.sg\u002FAct\u002FPDPA2012) - The Personal Data Protection Act 2012 (the 
\"Act\") sets out the law on data protection in Singapore. Apart from establishing a general data protection regime, the Act also regulates telemarketing practices.\n* [Protection from Online Falsehoods and Manipulation Act 2019](https:\u002F\u002Fsso.agc.gov.sg\u002FActs-Supp\u002F18-2019\u002FPublished\u002F20190625?DocDate=20190625) - An act to prevent the electronic communication in Singapore of false statements of fact, to suppress support for and counteract the effects of such communication, to safeguard against the use of online accounts for such communication and for information manipulation, to enable measures to be taken to enhance transparency of online political advertisements and for related matters.\n\n## Switzerland\n\n* [New Federal Act on Data Protection (nFADP)](https:\u002F\u002Fwww.fedlex.admin.ch\u002Feli\u002Fcc\u002F2022\u002F491\u002Fen) - The New Federal Act on Data Protection (nFADP) was passed in the Swiss Parliament in 2020 and came into effect from September 1, 2023. It aims to improve the processing of personal data and grant new rights to the people concerned, and also comes with a number of obligations for companies. Every company, Swiss or otherwise, offering goods and services to Swiss citizens and processing their personal data is subject to the nFADP. 
More highlights on nFADP available [here](https:\u002F\u002Fwww.kmu.admin.ch\u002Fkmu\u002Fen\u002Fhome\u002Ffacts-and-trends\u002Fdigitization\u002Fdata-protection\u002Fnew-federal-act-on-data-protection-nfadp.html).\n\n## United Arab Emirates\n\n* [UAE National Strategy for AI](https:\u002F\u002Fai.gov.ae\u002Fwp-content\u002Fuploads\u002F2021\u002F07\u002FUAE-National-Strategy-for-Artificial-Intelligence-2031.pdf) - This paper outlines the UAE's ambitions to become a fast adopter of emerging AI technologies across Government, as well as attract top AI talent to experiment with new technologies and work in a sophisticated, secure ecosystem to solve complex problems.\n\n## United Kingdom\n\n* [The Information Commissioner's Office guide to Data Protection](https:\u002F\u002Fico.org.uk\u002Ffor-organisations\u002Fguide-to-data-protection\u002F) - This guide is for data protection officers and others who have day-to-day responsibility for data protection. It is aimed at small and medium-sized organisations, but it may be useful for larger organisations too.\n* [UK Data Protection Act of 2018](http:\u002F\u002Fwww.legislation.gov.uk\u002Fukpga\u002F2018\u002F12\u002Fcontents\u002Fenacted) - The DPA 2018 enacts the GDPR into UK law; in doing so, however, it includes various \"derogations\" as permitted by the GDPR, resulting in some key differences (which, although small, are not insignificant and may have a greater impact after Brexit).\n* [UK's AI regulation: a pro-innovation approach](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fai-regulation-a-pro-innovation-approach) - This white paper details the UK's plans for implementing a pro-innovation approach to AI regulation.\n\n## United States of America\n\n* [California Consumer Privacy Act (CCPA)](http:\u002F\u002Fleginfo.legislature.ca.gov\u002Ffaces\u002FbillCompareClient.xhtml?bill_id=201720180AB375) - Legal text for California's consumer privacy act\n* [DoD's Ethical Principles for 
AI](https:\u002F\u002Fwww.diu.mil\u002Fresponsible-ai-guidelines) - The U.S.A. Department of Defense responsible AI guidelines for tech contractors. The guidelines provide a step-by-step process to follow during the planning, development, and deployment phases of the technical lifecycle.\n* [EU-U.S. and Swiss-U.S. Privacy Shield Frameworks](https:\u002F\u002Fwww.privacyshield.gov\u002Fwelcome) - The EU-U.S. and Swiss-U.S. Privacy Shield Frameworks were designed by the U.S. Department of Commerce and the European Commission and Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the European Union and Switzerland to the United States in support of transatlantic commerce.\n* [Executive Order on Maintaining American Leadership in AI](https:\u002F\u002Fwww.whitehouse.gov\u002Fpresidential-actions\u002Fexecutive-order-maintaining-american-leadership-artificial-intelligence\u002F) - Official mandate by the President of the US to maintain American leadership in artificial intelligence.\n* [Fair Credit Reporting Act 2018](https:\u002F\u002Fwww.ftc.gov\u002Fenforcement\u002Fstatutes\u002Ffair-credit-reporting-act) - The Fair Credit Reporting Act is a federal law that regulates the collection of consumers' credit information and access to their credit reports.\n* [Gramm-Leach-Bliley Act (for financial institutions)](https:\u002F\u002Fwww.ftc.gov\u002Ftips-advice\u002Fbusiness-center\u002Fprivacy-and-security\u002Fgramm-leach-bliley-act) - The Gramm-Leach-Bliley Act requires financial institutions (companies that offer consumers financial products or services like loans, financial or investment advice, or insurance) to explain their information-sharing practices to their customers and to safeguard sensitive data.\n* [Health Insurance Portability and Accountability Act of 1996](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fsecurity\u002Flaws-regulations\u002Findex.html) - HIPAA required the 
Secretary of the US Department of Health and Human Services (HHS) to develop regulations protecting the privacy and security of certain health information; HHS subsequently published what are known as the HIPAA [Privacy Rule](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fprivacy\u002Findex.html) and the HIPAA [Security Rule](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fsecurity\u002Findex.html).\n* [Privacy Act of 1974](https:\u002F\u002Fwww.justice.gov\u002Fopcl\u002Fprivacy-act-1974) - The Privacy Act of 1974 establishes a code of fair information practices that governs the collection, maintenance, use and dissemination of information about individuals that is maintained in systems of records by federal agencies.\n* [Privacy Protection Act of 1980](https:\u002F\u002Fepic.org\u002Fprivacy\u002Fppa\u002F) - The Privacy Protection Act of 1980 protects journalists from being required to turn over to law enforcement any work product and documentary materials, including sources, before they are disseminated to the public.\n* [The White House Executive Order on AI](https:\u002F\u002Fwww.whitehouse.gov\u002Fbriefing-room\u002Fpresidential-actions\u002F2023\u002F10\u002F30\u002Fexecutive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\u002F) - The USA Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence\n\n# High Level Frameworks and Principles\n\n* [AI & Machine Learning 8 principles for Responsible ML](https:\u002F\u002Fethical.institute\u002Fprinciples.html) - The Institute for Ethical AI & Machine Learning has put together 8 principles for responsible machine learning that are to be adopted by individuals and delivery teams designing, building and operating machine learning systems.\n* [Algorithm charter for Aotearoa New 
Zealand](https:\u002F\u002Fdata.govt.nz\u002Fuse-data\u002Fdata-ethics\u002Fgovernment-algorithm-transparency-and-accountability\u002Falgorithm-charter) - The Algorithm Charter for Aotearoa New Zealand is an evolving piece of work that needs to respond to emerging technologies and also be fit-for-purpose for government agencies.\n* [An Evaluation of Guidelines - The Ethics of Ethics](https:\u002F\u002Farxiv.org\u002Fftp\u002Farxiv\u002Fpapers\u002F1903\u002F1903.03425.pdf) - A research paper that analyses multiple ethics principles\n* [Association for Computing Machinery's Code of Ethics and Professional Conduct](https:\u002F\u002Fwww.acm.org\u002Fcode-of-ethics) - This is the code of ethics that was first put together in 1992 by the Association for Computing Machinery and updated in 2018. The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way. Additionally, the Code serves as a basis for remediation when violations occur. The Code includes principles formulated as statements of responsibility, based on the understanding that the public good is always the primary consideration.\n* [Boundary & Consent Reference Frameworks](https:\u002F\u002Fshadow-hickory-d63.notion.site\u002FBoundary-Consent-Reference-Frameworks-2f99b44ae6578072992cf2725684bcf0?source=copy_link) - For reference only. 
Defines non-execution judgment boundaries and consent-state gating for AI systems.\n* [Declaration for responsible and intelligent data practice](https:\u002F\u002Fwww.declaration.org.uk\u002F) - a shared vision of what best practice in data looks like by [Open Data Manchester](https:\u002F\u002Fwww.opendatamanchester.org.uk\u002F).\n* [European Commission's Guidelines for Trustworthy AI](https:\u002F\u002Fdigital-strategy.ec.europa.eu\u002Fen\u002Flibrary\u002Fethics-guidelines-trustworthy-ai) - The Ethics Guidelines for Trustworthy Artificial Intelligence (AI) is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.\n* [From What to How: An initial review of publicly available AI Ethics Tools, Methods and Research to translate principles into practices](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.06876) - A paper published by the UK Digital Catapult that aims to identify and present the gap between principles and their practical applications.\n* [IEEE's Ethically Aligned Design](https:\u002F\u002Fethicsinaction.ieee.org\u002F) - A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems that encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies.\n* [Montreal AI Ethics Institute State of AI Ethics June 2020 Report](https:\u002F\u002Fmontrealethics.ai\u002Fthe-state-of-ai-ethics-report-june-2020\u002F) - A resource put together by the [Montreal AI Ethics Institute](https:\u002F\u002Fmontrealethics.ai) that captures the most relevant research and reporting in the domain of AI ethics between March 2020 and June 2020.\n* [Montréal Declaration for a responsible development of artificial intelligence](https:\u002F\u002Fwww.montrealdeclaration-responsibleai.com\u002Fthe-declaration) -  ethical 
principles and values that promote the fundamental interests of people and groups, created as an initiative by Université de Montréal\n* [Oxford's Recommendations for AI Governance](https:\u002F\u002Fwww.fhi.ox.ac.uk\u002Fwp-content\u002Fuploads\u002FStandards_-FHI-Technical-Report.pdf) - A set of recommendations from Oxford's Future of Humanity Institute which focus on the infrastructure and attributes required for efficient design, development, and research around the ongoing work building & implementing AI standards.\n* [PWC's Responsible AI](https:\u002F\u002Fwww.pwc.com\u002Fgx\u002Fen\u002Fissues\u002Fdata-and-analytics\u002Fartificial-intelligence\u002Fwhat-is-responsible-ai.html) - PWC has put together a survey and a set of principles that abstract some of the key areas they've identified for responsible AI.\n* [Recommendation on the Ethics of Artificial Intelligence](https:\u002F\u002Funesdoc.unesco.org\u002Fark:\u002F48223\u002Fpf0000381137) - The Recommendation by UNESCO is a comprehensive international framework aiming to shape the development and use of AI technologies; it establishes a set of values in line with the promotion and protection of human rights, human dignity, and environmental sustainability. It was adopted by acclamation by 193 Member States at UNESCO's General Conference in November 2021. For more information, refer to UNESCO's 2023 publication on key facts [here](https:\u002F\u002Fwww.unesco.org\u002Fen\u002Farticles\u002Funescos-recommendation-ethics-artificial-intelligence-key-facts).\n* [SHELET Protocol](https:\u002F\u002Fmordechaipotash.github.io\u002Fpoly-wiki-public\u002F) - A structured governance loop (Sense → Hypothesize → Elect → Let → Evaluate → Tighten) for human oversight of AI systems, emphasizing sovereignty through iterative compression rather than rigid control. 
[Source](https:\u002F\u002Fgithub.com\u002Fmordechaipotash\u002Fpoly-wiki-public)\n* [Singapore Data Protection Govt Commission's AI Governance Principles](https:\u002F\u002Fwww.pdpc.gov.sg\u002Fhelp-and-resources\u002F2020\u002F01\u002Fmodel-ai-governance-framework) - The Singapore government's Personal Data Protection Commission has put together a set of guiding principles towards data protection and human involvement in automated systems, and comes with a report that breaks down the [guiding principles and motivations](https:\u002F\u002Fwww.pdpc.gov.sg\u002F-\u002Fmedia\u002FFiles\u002FPDPC\u002FPDF-Files\u002FResource-for-Organisation\u002FAI\u002FPrimer-for-2nd-edition-of-AI-Gov-Framework.pdf?la=en).\n* [Technical and Organizational Best Practices](https:\u002F\u002Fwww.fbpml.org\u002Fthe-best-practices\u002Fthe-best-practices) - A resource put together by [Foundation for Best Practices in Machine Learning (FBPML)](https:\u002F\u002Fwww.fbpml.org\u002F) with technical guidelines (e.g. fairness and non-discrimination, monitoring and maintenance, data quality, product traceability, explainability) and organizational guidelines (e.g. data governance, product management, human resources management, compliance and auditing). 
Community contributions are welcome via the [FBPML Wiki](https:\u002F\u002Fwiki.fbpml.org\u002Fwiki\u002FMain_Page).\n* [Toronto Declaration](https:\u002F\u002Fwww.accessnow.org\u002Fthe-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems\u002F) - Protecting the rights to equality and non-discrimination in machine learning systems, by Access Now.\n* [UK Government's Data Ethics Framework Principles](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-framework\u002Fdata-ethics-framework) - A resource put together by the Department for Digital, Culture, Media and Sport (DCMS) which outlines an overview of data ethics, together with a 7-principle framework.\n* [Understanding artificial intelligence ethics and safety](https:\u002F\u002Fwww.turing.ac.uk\u002Fresearch\u002Fpublications\u002Funderstanding-artificial-intelligence-ethics-and-safety) - A guide for the responsible design and implementation of AI systems in the public sector by David Leslie from the [Alan Turing Institute](https:\u002F\u002Fwww.turing.ac.uk\u002F).\n\n# Industry standards initiatives\n\n* [ACS Code of Professional Conduct - PDF](https:\u002F\u002Fwww.acs.org.au\u002Fcontent\u002Fdam\u002Facs\u002Frules-and-regulations\u002FCode-of-Professional-Conduct_v2.1.pdf) - Australian ICT (Information and Communication Technology) sector professional organization.\n* [Association for Computing Machinery's Code of Ethics and Professional Conduct](https:\u002F\u002Fwww.acm.org\u002Fcode-of-ethics) - This is the code of ethics that was first put together in 1992 by the Association for Computing Machinery and updated in 2018. The Code is designed to inspire and guide the ethical conduct of all computing professionals, including current and aspiring practitioners, instructors, students, influencers, and anyone who uses computing technology in an impactful way. 
Additionally, the Code serves as a basis for remediation when violations occur. The Code includes principles formulated as statements of responsibility, based on the understanding that the public good is always the primary consideration.\n* [IEEE Global Initiative for Ethical Considerations in Artificial Intelligence (AI) and Autonomous Systems (AS)](https:\u002F\u002Fethicsinaction.ieee.org\u002F) - IEEE Approved Standards Projects specifically focused on the Ethically Aligned Design principles, and includes 14 (P700X) standards which cover topics from data collection to privacy, to algorithmic bias and beyond.\n* [ISO\u002FIEC's Standards for Artificial Intelligence](https:\u002F\u002Fwww.iso.org\u002Fcommittee\u002F6794475\u002Fx\u002Fcatalogue\u002F) - The ISO's initiative for Artificial Intelligence standards, which include a large set of subsequent standards ranging across Big Data, AI Terminology, Machine Learning frameworks, etc.\n* [Open Ethics Transparency Protocol](https:\u002F\u002Fopenethics.ai\u002Foetp\u002F) - The Open Ethics Transparency Protocol ([OETP](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOETP) or Protocol) describes the creation and exchange of voluntary ethics Disclosures for products that involve Automated Decision-Making (ADM), including AI-powered services, Robotic process automation (RPA) tools, and robots. 
It is intended to increase the transparency of how digital products with full or partial automation are built and deployed.\n\n# Interactive and Practical Tools\n\n* [Aequitas' Bias & Fairness Audit Toolkit](http:\u002F\u002Faequitas.dssg.io\u002F) - The Bias Report is powered by Aequitas, an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and make informed and equitable decisions around developing and deploying predictive risk-assessment tools.\n* [AI Atlas Nexus](https:\u002F\u002Fgithub.com\u002FIBM\u002Fai-atlas-nexus) - An open-source Python toolkit from IBM Research that uses knowledge graphs to unify AI risk taxonomies (NIST AI RMF, OWASP LLM Top 10, EU AI Act, MIT AI Risk Repository) for risk identification, prioritization, and compliance automation. Listed in the [OECD AI Catalogue](https:\u002F\u002Foecd.ai\u002Fen\u002Fcatalogue\u002Ftools\u002Frisk-atlas-nexus) and part of the [AI Alliance](https:\u002F\u002Fthealliance.ai\u002F) Trust & Safety initiative.\n* [AIR Blackbox](https:\u002F\u002Fgithub.com\u002Fairblackbox\u002Fair-blackbox-mcp) - Open-source EU AI Act compliance scanner and trust layer for Python AI agents. Checks code against Articles 9, 10, 11, 12, 14, and 15 with framework-specific integrations for LangChain, CrewAI, AutoGen, OpenAI, and Anthropic SDKs. Includes HMAC-SHA256 audit chains, PII tokenization, consent gating, and prompt injection detection. 
Apache 2.0 licensed.\n* [Alibi](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi) - An open-source Python library for machine learning model inspection and interpretation.\n* [Awesome Machine Learning Production List](https:\u002F\u002Fgithub.com\u002Fethicalml\u002Fawesome-production-machine-learning) - A list of tools and frameworks that support the design, development and operation of production machine learning systems, currently maintained by The Institute for Ethical AI & Machine Learning.\n* [Cape Python](https:\u002F\u002Fgithub.com\u002Fcapeprivacy\u002Fcape-python) - Easily apply privacy-enhancing techniques for data science and machine learning tasks in Pandas and Spark. Can be used in conjunction with [Cape Core](https:\u002F\u002Fgithub.com\u002Fcapeprivacy\u002Fcape) to collaborate on privacy policies and distribute those policies for data projects across teams and organizations.\n* [eXplainability Toolbox](https:\u002F\u002Fethical.institute\u002Fxai.html) - The Institute for Ethical AI & Machine Learning proposal for an extended version of the traditional data science process which focuses on algorithmic bias and explainability, to ensure a baseline of risks around undesired biases can be mitigated.\n* [FAT Forensics](https:\u002F\u002Ffat-forensics.org\u002F) - A Python toolkit for evaluating Fairness, Accountability and Transparency of Artificial Intelligence systems. 
It is built on top of SciPy and NumPy, and distributed under the 3-Clause BSD license (new BSD).\n* [IBM's AI Explainability 360 Open Source Toolkit](http:\u002F\u002Faix360.mybluemix.net\u002F) - This is IBM's toolkit that includes a large number of examples, research papers and demos implementing several algorithms that provide insights on fairness in machine learning systems.\n* [Linux Foundation AI Landscape](https:\u002F\u002Flandscape.lfai.foundation\u002F) - The official list of tools in the AI landscape curated by the Linux Foundation, which contains well-maintained and widely used tools and frameworks.\n* [Microsoft Fairlearn](https:\u002F\u002Ffairlearn.org\u002F) - An open-source toolkit for assessing and improving fairness in machine learning products, developed by Microsoft.\n* [Microsoft Interpret ML](https:\u002F\u002Finterpret.ml\u002F) - An open-source toolkit for improving explainability\u002Finterpretability, developed by Microsoft.\n* [Open Ethics Data Passport](https:\u002F\u002Fopenethics.ai\u002Foedp\u002F) - The Open Ethics Data Passport ([OEDP](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOEDP)) is focused on describing the origins of models by shedding light on how the training datasets were collected, cleaned, labeled, and used to train the AI models.\n* [Open Ethics Label: AI nutrition labels](https:\u002F\u002Fopenethics.ai\u002Flabel\u002F) - Open Ethics Label ([OEL](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOEL)) is a \"nutrition label\" for digital products, including AI systems. It brings a standard of self-disclosure to digital products to make these products transparent and safe for consumers.\n* [Orchard Kit](https:\u002F\u002Fgithub.com\u002FOrchardHarmonics\u002Forchard-kit) - Alignment, safety & cognitive architecture for autonomous AI agents. Membrane security, epistemic tagger, self-audit, agent discovery, cognitive architecture, collective cognition. 
Zero dependencies.\n* [Taking action on digital ethics](https:\u002F\u002Fwww.avanade.com\u002Fen-us\u002Fthinking\u002Fresearch-and-insights\u002Ftrendlines\u002Fdigital-ethics) - From Avanade.\n\n# Online Courses and Learning Resources\n\n* [Bias and Discrimination in AI](https:\u002F\u002Fwww.edx.org\u002Fcourse\u002Fbias-and-discrimination-in-ai) - Free course by the University of Montreal and IVADO (via edX) about the discriminatory effects of algorithmic decision-making and responsible machine learning (institutional and technical strategies to identify and address bias).\n* [Data science ethics](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fdata-science-ethics) - Free course by Prof. Jagadish from the University of Michigan (via Coursera) that covers data ownership, privacy and anonymity, data validity, and algorithmic fairness.\n* [Intro to AI ethics](https:\u002F\u002Fwww.kaggle.com\u002Flearn\u002Fintro-to-ai-ethics) - Free course by Kaggle introducing the basic concepts of ethics in AI and how to mitigate related problems.\n* [Introduction to AI Safety, Ethics, and Society](https:\u002F\u002Fwww.aisafetybook.com\u002F) - Developed by Dan Hendrycks, director of the [Center for AI Safety](https:\u002F\u002Fwww.safe.ai\u002F), this free online textbook aims to provide an accessible introduction to students, practitioners and others looking to better understand issues pertaining to AI Safety and Ethics. 
Apart from online reading, this book is available as a PDF [here](https:\u002F\u002Fwww.aisafetybook.com\u002Fdownload) and also as a free virtual course [here](https:\u002F\u002Fwww.aisafetybook.com\u002Fvirtual-course).\n* [Practical Data Ethics](https:\u002F\u002Fethics.fast.ai\u002F) - Free course by Rachel Thomas from the University of San Francisco Data Institute (via fast.ai) that covers disinformation, bias and fairness, foundations of ethics, privacy and surveillance, and algorithmic colonialism.\n* [Udacity's Secure & Private AI Course](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fsecure-and-private-ai--ud185) - Free course by Udacity which introduces three cutting-edge technologies for privacy-preserving AI: Federated Learning, Differential Privacy, and Encrypted Computation.\n\n# Processes and Checklists\n\n* [AI RFX Procurement Framework](https:\u002F\u002Fethical.institute\u002Frfx.html) - A procurement framework for evaluating the maturity of machine learning systems, put together by cross-functional teams of academics, industry practitioners and technical individuals at The Institute for Ethical AI & Machine Learning to empower industry practitioners looking to procure machine learning suppliers.\n* [Checklist for data science projects](http:\u002F\u002Fdeon.drivendata.org\u002F) - Deon by DrivenData is a command-line tool that allows you to easily add an ethics checklist to your data science projects.\n* [Designing Ethical AI Experiences Checklist and Agreement](https:\u002F\u002Fresources.sei.cmu.edu\u002Flibrary\u002Fasset-view.cfm?assetid=636620) - A document to guide the development of accountable, de-risked, respectful, secure, honest, and usable artificial intelligence (AI) systems with a diverse team aligned on shared ethics. 
By Carnegie Mellon University's Software Engineering Institute.\n* [Ethical OS Toolkit](https:\u002F\u002Fethicalos.org\u002F) - A toolkit that dives into 8 risk zones to assess the potential challenges that a technology team may face, together with 14 scenarios to provide examples, and 7 future-proofing strategies to help take ethical action.\n* [Ethics Canvas](https:\u002F\u002Fwww.ethicscanvas.org\u002Findex.html) - A resource inspired by the traditional business canvas, which provides an interactive way to brainstorm potential risks, opportunities and solutions to ethical challenges that may be faced in a project, using a post-it-note-like approach.\n* [Kat Zhou's Design Ethically Resources](https:\u002F\u002Fwww.designethically.com\u002Ftoolkit) - A set of workshops that can be organised across teams to identify challenges, assess current risks and take action on potential issues around ethical challenges that may be faced.\n* [Machine Learning Assurance](https:\u002F\u002Fmonitaur.ai\u002Fblog\u002Fmachine-learning-assurance\u002F) - A quick look at machine learning assurance: the process of recording, understanding, verifying, and auditing machine learning models and their transactions.\n* [Markkula Center's Ethical Toolkit for Engineering\u002FDesign Practice](https:\u002F\u002Fwww.scu.edu\u002Fethics-in-technology-practice\u002Fethical-toolkit\u002F) - A practical and comprehensible toolkit with seven components to help practitioners reflect on and judge the moral grounds on which they are operating.\n* [Microsoft AI Fairness Checklist](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fproject\u002Fai-fairness-checklist\u002F)\n* [ODEP's Checklist for Employers: Facilitating the Hiring of People with Disabilities Through the Use of eRecruiting Screening Systems, Including AI](https:\u002F\u002Fwww.peatworks.org\u002Fwp-content\u002Fuploads\u002F2020\u002F10\u002FEARN_PEAT_eRecruiting_Checklist.pdf) - The Employer Assistance and Resource Network on 
Disability Inclusion (EARN) and the Partnership on Employment & Accessible Technology (PEAT), which are both funded through the U.S. Department of Labor's Office of Disability Employment Policy (ODEP), collaborated on an inclusive AI checklist for employers. The checklist provides direction for leadership, human resources personnel, equal employment opportunity managers, and procurement officers for reviewing AI tools used in recruiting and candidate assessment for fairness and inclusion of individuals with disabilities.\n* [Open Ethics Maturity Model (OEMM)](https:\u002F\u002Fopenethics.ai\u002Foemm\u002F) - The Open Ethics Maturity Model (OEMM) is a five-level framework. It embarks the organization on a journey toward transparent governance of AI and autonomous systems.\n* [San Francisco City's Ethics & Algorithms Toolkit](https:\u002F\u002Fethicstoolkit.ai\u002F) - A risk management framework for government leaders and staff who work with algorithms, providing a two part assessment process including an algorithmic assessment process, and a process to address the risks.\n* [UK Government's Data Ethics Workbook](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-workbook\u002Fdata-ethics-workbook) - A resource put together by the Department for Digital, Culture, Media and Sport (DCMS) which provides a set of questions that can be asked by practitioners in the public sector, which address each of the principles in their [Data Ethics Framework Principles](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-framework\u002Fdata-ethics-framework).\n* [US NIST AI Risk Management Framework](https:\u002F\u002Fwww.nist.gov\u002Fitl\u002Fai-risk-management-framework) - The Framework is intended to help developers, users and evaluators of AI systems better manage AI risks which could affect individuals, organizations, society, or the environment.\n* [World Economic Forum's Guidelines for 
Procurement](https:\u002F\u002Fwww.weforum.org\u002Fpress\u002F2019\u002F09\u002Fuk-government-first-to-pilot-ai-procurement-guidelines-co-designed-with-world-economic-forum\u002F) - The WEF has put together a set of guidelines for governments to be able to safely and reliably procure machine-learning-related systems, which has been trialled with the UK government.\n\n# Research and Industry Newsletters\n\n* [AI Safety Newsletter](https:\u002F\u002Fnewsletter.safe.ai\u002F) - A weekly newsletter from the Center for AI Safety providing updates on AI research, policy, and other areas for a non-technical audience.\n* [Import AI](https:\u002F\u002Fjack-clark.net\u002F) - A newsletter curated by OpenAI's Jack Clark covering the most recent and relevant AI research, as well as relevant societal issues that intersect with technical AI research.\n* [Matt's thoughts in between](https:\u002F\u002Fwww.getrevue.co\u002Fprofile\u002Fmattclifford) - A newsletter curated by Entrepreneur First CEO Matt Clifford that provides curated critical analysis on topics surrounding geopolitics, deep tech startups, economics and beyond.\n* [ML Safety Newsletter](https:\u002F\u002Fnewsletter.mlsafety.org\u002F) - A newsletter from the Center for AI Safety providing occasional deep-dives on key results in technical AI research.\n* [Montreal AI Ethics Institute Weekly AI Ethics newsletter](https:\u002F\u002Faiethics.substack.com) - A weekly newsletter curated by [Abhishek Gupta](https:\u002F\u002Fatg-abhishek.github.io) and his team at the [Montreal AI Ethics Institute](https:\u002F\u002Fmontrealethics.ai) that presents accessible summaries of technical and academic research papers along with commentary on the latest in the domain of AI ethics.\n* [The Machine Learning Engineer](https:\u002F\u002Fethical.institute\u002Fmle.html) - A newsletter curated by The Institute for Ethical AI & Machine Learning that contains curated articles, tutorials and blog posts from experienced Machine 
Learning professionals and includes insights on best practices, tools and techniques in machine learning explainability, reproducibility, model evaluation, feature analysis and beyond.\n","[![Awesome](images\u002Fawesome.svg)](https:\u002F\u002Fgithub.com\u002Fsindresorhus\u002Fawesome)\n[![维护中](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FMaintained%3F-YES-green.svg)](https:\u002F\u002Fgithub.com\u002FEthicalML\u002Fawesome-artificial-intelligence-guidelines\u002Fgraphs\u002Fcommit-activity)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FRelease-PROD-yellow.svg)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLanguages-MULTI-blue.svg)\n![GitHub](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FLicense-MIT-lightgrey.svg)\n[![GitHub](https:\u002F\u002Fimg.shields.io\u002Ftwitter\u002Ffollow\u002Faxsaucedo.svg?label=Follow)](https:\u002F\u002Ftwitter.com\u002FAxSaucedo\u002F)\n\n\n\u003Ctable>\n\u003Ctr>\n\u003Ctd width=\"60%\">\n\u003Ch1>优秀的人工智能监管、原则与指南\u003C\u002Fh1>\n\u003C\u002Ftd>\n\u003Ctd>\n\u003Ca href=\"https:\u002F\u002Fyoutu.be\u002FdKjCWfuvYxQ?t=147\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEthicalML_awesome-artificial-intelligence-regulation_readme_e809e9f2ed68.gif\">\u003C\u002Fa> \u003Cbr> （又名：负责任地编写人工智能）\n\u003C\u002Ftd>\n\u003C\u002Ftr>\n\u003C\u002Ftable>\n\n## 概述\n\n随着人工智能系统在社会中的普及，我们面临着日益严峻的社会挑战。由于许多挑战尚属首次出现，从业者将不得不应对复杂的伦理和社会问题。\n\n目前已有大量内容通过“原则”、“伦理框架”、“检查清单”等形式探讨这些问题。然而，在众多资源中找到合适的内容并不容易。\n\n本仓库旨在简化这一过程，通过梳理围绕人工智能制定的各类指南、原则、道德规范、标准及监管措施，帮助用户更好地理解和应用这些资源。\n\n## 本页各部分快速链接\n\n### 按经济区域划分的国家监管\n\n| | | | |\n|-|-|-|-|\n|[奥地利 🇦🇹](#austria)|[巴西 🇧🇷](#brazil)|[加拿大 🇨🇦](#canada)|[中国 🇨🇳](#china)|\n|[迪拜 🇦🇪](#dubai)|[欧盟 🇪🇺](#european-union)|[印度 🇮🇳](#india)|[爱尔兰 🇮🇪](#ireland)|\n|[以色列 🇮🇱](#israel)|[墨西哥 🇲🇽](#mexico)|[新加坡 🇸🇬](#singapore)|[瑞士 🇨🇭](#switzerland)|\n|[阿联酋 🇦🇪](#united-arab-emirates)|[英国 🇬🇧](#united-kingdom)|[美国 🇺🇸](#united-states-of-america)|\n\n### 其他部分\n\n| | | |\n|-|-|-|\n|[🔍 
高层次框架与原则](#high-level-frameworks-and-principles) |[📜 行业标准倡议](#industry-standards-initiatives)| [🔨 互动式实用工具](#interactive-and-practical-tools)|\n|[📚 在线课程](#online-courses-and-learning-resources)|[🔏 流程与检查清单](#processes-and-checklists)|[🤖 研究与行业通讯](#research-and-industry-newsletters)|\n\n## 其他相关资源\n\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd width=\"30%\">\n         您可以订阅\u003Ca href=\"https:\u002F\u002Fethical.institute\u002Fmle.html\">机器学习工程师\u003C\u002Fa>通讯。您将收到由机器学习专业人士精选的开源框架、教程和文章更新。\n    \u003C\u002Ftd>\n    \u003Ctd width=\"70%\">\n        \u003Ca href=\"https:\u002F\u002Fethical.institute\u002Fmle.html\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEthicalML_awesome-artificial-intelligence-regulation_readme_bf9a0a64abe8.png\">\u003C\u002Fa>\n    \u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\n# 监管与政策\n\n## 奥地利\n\n* [奥地利2030年人工智能使命](https:\u002F\u002Fwww.bmk.gv.at\u002Fthemen\u002Finnovation\u002Fpublikationen\u002Fikt\u002Fai\u002Faimat.html) - 塑造奥地利人工智能的未来。奥地利创新与技术部发布了其至2030年的AI愿景。\n\n## 巴西\n\n* [巴西人工智能监管（PL 2338\u002F2023）](https:\u002F\u002Fwww25.senado.leg.br\u002Fweb\u002Fatividade\u002Fmaterias\u002F-\u002Fmateria\u002F157233) - 巴西一项旨在建立全面的人工智能开发与使用框架的提案法案，强调透明度、问责制以及与国际标准的一致性。\n\n## 加拿大\n\n* [人工智能与数据法案（AIDA）](https:\u002F\u002Fised-isde.canada.ca\u002Fsite\u002Finnovation-better-canada\u002Fen\u002Fartificial-intelligence-and-data-act-aida-companion-document) - 为颁布《消费者隐私保护法》、《个人信息与数据保护法庭法》以及《人工智能与数据法案》，并对其他法律作出相应及关联修订而制定的法案。\n\n## 中国\n\n* [北京人工智能原则](https:\u002F\u002Fwww.baai.ac.cn\u002Fblog\u002Fbeijing-ai-principles) - 一项关于人工智能研究、开发、应用、治理及长期规划的倡议，呼吁促进人工智能健康发展，以支持构建人类命运共同体，实现人工智能造福人类与自然。\n* [中国网络安全法](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FChina_Internet_Security_Law) - 
中国为提升网络安全和国家安全、维护网络空间主权与公共利益、保护公民、法人及其他组织的合法权益、促进经济社会健康发展而制定的法律（中国工业和信息化部认为，该法为自1999年以来中国坚持实施的“走出去”战略提供了正当依据）。[毕马威关于网络安全法的摘要](https:\u002F\u002Fassets.kpmg\u002Fcontent\u002Fdam\u002Fkpmg\u002Fcn\u002Fpdf\u002Fen\u002F2017\u002F02\u002Foverview-of-cybersecurity-law.pdf)。战略与国际问题研究中心[对中国新数据隐私法的概述](https:\u002F\u002Fwww.csis.org\u002Fanalysis\u002Fnew-china-data-privacy-standard-looks-more-far-reaching-gdpr)。\n* [中华人民共和国微博信息服务管理规定](http:\u002F\u002Fen.pkulaw.cn\u002Fdisplay.aspx?cgid=309714&lib=law) - 中国要求微博网站（社交媒体平台）依法取得相关资质、核实用户真实身份、建立辟谣机制等的规定。[规则摘要](https:\u002F\u002Fwww.loc.gov\u002Flaw\u002Fforeign-news\u002Farticle\u002Fchina-rules-regulating-microblogs-issued\u002F)由美国国会图书馆提供。\n* [生成式人工智能服务管理暂行办法](https:\u002F\u002Fwww.cac.gov.cn\u002F2023-07\u002F13\u002Fc_1690898327029107.htm) - 中国首部针对生成式人工智能服务的行政法规，于2023年8月15日起施行。\n* [个人信息安全规范（译文）](https:\u002F\u002Fwww.newamerica.org\u002Fcybersecurity-initiative\u002Fdigichina\u002Fblog\u002Ftranslation-chinas-personal-information-security-specification\u002F) - 中国政府于2018年5月生效的第一项重要数字隐私规则，详细规定了同意机制以及个人数据的收集、使用和共享方式。战略与国际问题研究中心[对该规范的概述](https:\u002F\u002Fwww.csis.org\u002Fanalysis\u002Fchinas-emerging-data-privacy-system-and-gdpr)。\n* [关于加强网络信息保护的决定](https:\u002F\u002Fwww.globalprivacyblog.com\u002Fprivacy\u002Fchinas-legislature-adopts-decision-on-strengthening-the-protection-of-online-information\u002F) - 中华人民共和国全国人民代表大会常务委员会通过了《关于加强网络信息保护的决定》——这是一项包含12条条款的法律，适用于公私部门实体在互联网上收集和处理电子个人信息的行为。\n* [个人资料保护法](https:\u002F\u002Flaw.moj.gov.tw\u002FENG\u002FLawClass\u002FLawAll.aspx?pcode=I0050021) - 中华民国的个人资料保护法，旨在规范个人资料的收集、处理和使用，以防止对人格权的侵害，并促进个人资料的合理利用。\n\n## 迪拜\n\n* [智慧迪拜人工智能原则与伦理——伦理AI工具包](https:\u002F\u002Fwww.smartdubai.ae\u002Finitiatives\u002Fai-principles-ethics) - 旨在为整个城市生态系统提供实用帮助。该工具包支持业界、学术界及个人理解如何负责任地使用AI系统，包含一系列原则与指南，以及供开发者评估其平台的自我评估工具。\n\n## 欧盟\n\n* [值得信赖的人工智能伦理指南](https:\u002F\u002Fec.europa.eu\u002Ffuturium\u002Fen\u002Fai-alliance-consultation) - 由欧盟人工智能高级别专家组（AI HLEG）编制的欧盟委员会文件。\n* 
[欧盟人工智能法案](https:\u002F\u002Fartificialintelligenceact.eu\u002F) - 欧盟人工智能法案是规范欧盟境内人工智能应用的法律框架；这是首个针对人工智能制定的监管法规（[概述](https:\u002F\u002Fwww.europarl.europa.eu\u002Ftopics\u002Fen\u002Farticle\u002F20230601STO93804\u002Feu-ai-act-first-regulation-on-artificial-intelligence)）。\n* [GDPR.EU指南](https:\u002F\u002Fgdpr.eu\u002F) - 由欧盟“地平线2020”框架计划共同资助的项目，为研究GDPR的组织和个人提供资源，包括一个简洁且最新的信息库，帮助各组织实现GDPR合规（[法律文本](https:\u002F\u002Fwww.govinfo.gov\u002Fcontent\u002Fpkg\u002FUSCODE-2012-title5\u002Fpdf\u002FUSCODE-2012-title5-partI-chap5-subchapII-sec552a.pdf)）。\n* [通用数据保护条例GDPR](https:\u002F\u002Feur-lex.europa.eu\u002Flegal-content\u002FEN\u002FTXT\u002F?uri=celex%3A32016R0679) - 欧洲议会与欧洲理事会于2016年4月27日颁布的关于保护自然人个人数据处理及该等数据自由流动，并废除95\u002F46\u002FEC指令的欧盟GDPR法规2016\u002F679的法律文本。\n\n## 印度\n\n* [2026年信息技术修正规则](https:\u002F\u002Fwww.meity.gov.in\u002Fstatic\u002Fuploads\u002F2026\u002F02\u002F550681ab908f8afb135b0ad42816a1c9.pdf) - 这些是更新后的规则，旨在修订2021年信息技术规则，以进一步强化中介机构的尽职调查框架，特别是在合成生成信息（SGI）及相关网络危害方面。常见问题解答[在此](https:\u002F\u002Fwww.meity.gov.in\u002Fstatic\u002Fuploads\u002F2025\u002F10\u002F065b6deb585441b5ccdf8be42502a49c.pdf)。\n* [数字个人数据保护法（DPDP）](https:\u002F\u002Fwww.meity.gov.in\u002Fwritereaddata\u002Ffiles\u002FDigital%20Personal%20Data%20Protection%20Act%202023.pdf) - 印度的DPDP法旨在规范数字个人数据的处理方式，既保障个人对其数据的保护权，又满足为合法目的及与其相关或附带事项而处理此类数据的需求。该法案于2023年8月在印度议会通过，目前由[MeitY](https:\u002F\u002Fwww.meity.gov.in\u002F)负责推动实施，并已于2025年1月3日发布了基于该法案的规则草案，供公众咨询。DPDP规则草案可供公众审阅并提出反馈[在此](https:\u002F\u002Finnovateindia.mygov.in\u002Fdpdp-rules-2025\u002F)。\n* [国家人工智能战略](https:\u002F\u002Fwww.niti.gov.in\u002Fsites\u002Fdefault\u002Ffiles\u002F2023-03\u002FNational-Strategy-for-Artificial-Intelligence.pdf) - 本文件着重探讨印度如何利用变革性技术，按照政府的发展理念实现社会包容性增长。此外，印度还应努力将这些解决方案推广至其他具有相似条件的发展中国家。\n* [负责任的人工智能原则](https:\u002F\u002Fwww.niti.gov.in\u002Fsites\u002Fdefault\u002Ffiles\u002F2021-02\u002FResponsible-AI-22022021.pdf) - 
该文件综合了跨部门磋商、大规模全球多方利益相关者磋商、与印度及全球人工智能伦理专家的一对一会谈，以及过去15个月间开展的广泛公众咨询所收集的见解、反馈和经验。本文旨在为人工智能生态系统提供一份重要路线图，鼓励印度以负责任的方式采用人工智能技术，建立公众对该技术的信任，并将“人工智能惠及全民”的理念置于核心位置。\n\n## 爱尔兰\n\n* [爱尔兰国家人工智能战略](https:\u002F\u002Fassets.gov.ie\u002Fstatic\u002Fdocuments\u002F5e511b3a\u002FNational_Digital_and_AI_Strategy_2030.pdf) - 爱尔兰于2026年发布其国家人工智能战略，重点解决人工智能素养、基础设施、应用推广等问题。\n\n## 以色列\n\n* [人工智能领域负责任发展的政策原则](https:\u002F\u002Fwww.gov.il\u002Fen\u002Fpages\u002Fmost-news20221117) - 该人工智能监管与伦理政策草案强调“负责任创新”，旨在确保行业进步的同时维护公共利益。\n\n## 墨西哥\n\n* [墨西哥国家人工智能议程提案](https:\u002F\u002Fwww.ania.org.mx\u002F_files\u002Fugd\u002F447d95_c7e6ebee6cf44b38a0d386cc9534f6e5.pdf) - 提出多方面的承诺建议，以指导墨西哥以合乎伦理和负责任的方式开发及使用人工智能。\n\n## 新加坡\n\n* [2012年个人数据保护法](https:\u002F\u002Fsso.agc.gov.sg\u002FAct\u002FPDPA2012) - 《2012年个人数据保护法》（简称“该法”）规定了新加坡的数据保护法律。除建立一般性的数据保护制度外，该法还对电话营销行为进行了规范。\n* [2019年防止在线虚假信息与操纵法](https:\u002F\u002Fsso.agc.gov.sg\u002FActs-Supp\u002F18-2019\u002FPublished\u002F20190625?DocDate=20190625) - 该法旨在防止在新加坡通过电子通信传播虚假事实陈述，压制对此类传播的支持并抵消其影响，防范在线账户被用于此类传播及信息操纵，同时授权采取措施提高在线政治广告的透明度，并处理相关事宜。\n\n## 瑞士\n\n* [新联邦数据保护法（nFADP）](https:\u002F\u002Fwww.fedlex.admin.ch\u002Feli\u002Fcc\u002F2022\u002F491\u002Fen) - 新联邦数据保护法于2020年在瑞士议会通过，并于2023年9月1日起生效。该法旨在改善个人数据处理流程，赋予相关人员新的权利，同时也对企业施加了一系列义务。任何向瑞士公民提供商品和服务并处理其个人数据的企业，无论是否位于瑞士境内，均需遵守nFADP的规定。更多关于nFADP的亮点信息可参见[此处](https:\u002F\u002Fwww.kmu.admin.ch\u002Fkmu\u002Fen\u002Fhome\u002Ffacts-and-trends\u002Fdigitization\u002Fdata-protection\u002Fnew-federal-act-on-data-protection-nfadp.html)。\n\n## 阿拉伯联合酋长国\n\n* [阿联酋国家人工智能战略](https:\u002F\u002Fai.gov.ae\u002Fwp-content\u002Fuploads\u002F2021\u002F07\u002FUAE-National-Strategy-for-Artificial-Intelligence-2031.pdf) - 本文件阐述了阿联酋致力于在政府各部门快速采用新兴人工智能技术的愿景，同时吸引顶尖人工智能人才，在先进且安全的生态系统中开展新技术试验，以解决复杂问题。\n\n## 英国\n\n* [信息专员办公室数据保护指南](https:\u002F\u002Fico.org.uk\u002Ffor-organisations\u002Fguide-to-data-protection\u002F) - 本指南面向数据保护官及其他日常负责数据保护工作的人员。它主要针对中小型企业，但也可能对大型企业有所帮助。\n* 
[2018年英国数据保护法](http:\u002F\u002Fwww.legislation.gov.uk\u002Fukpga\u002F2018\u002F12\u002Fcontents\u002Fenacted) - 该法将欧盟通用数据保护条例（GDPR）纳入英国法律体系，但在实施过程中根据GDPR的授权引入了若干“例外条款”，导致一些关键差异（尽管较小，但并非无关紧要，尤其是在脱欧之后可能产生更大影响）。\n* [英国人工智能监管：支持创新的方法](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fai-regulation-a-pro-innovation-approach) - 本白皮书详细介绍了英国实施支持创新的人工智能监管方针的计划。\n\n## 美国\n\n* [加州消费者隐私法案（CCPA）](http:\u002F\u002Fleginfo.legislature.ca.gov\u002Ffaces\u002FbillCompareClient.xhtml?bill_id=201720180AB375) - 加州消费者隐私法案的法律文本。\n* [美国国防部人工智能伦理原则](https:\u002F\u002Fwww.diu.mil\u002Fresponsible-ai-guidelines) - 美国国防部为技术承包商制定的人工智能负责任使用指南。该指南提供了一个在技术生命周期的规划、开发和部署阶段应遵循的分步流程。\n* [欧盟—美国及瑞士—美国隐私盾框架](https:\u002F\u002Fwww.privacyshield.gov\u002Fwelcome) - 欧盟—美国及瑞士—美国隐私盾框架由美国商务部、欧盟委员会和瑞士政府共同设计，旨在为大西洋两岸的企业提供一种机制，使其在将个人数据从欧盟和瑞士传输至美国以支持跨大西洋商业活动时，能够符合数据保护要求。\n* [关于保持美国人工智能领导地位的行政命令](https:\u002F\u002Fwww.whitehouse.gov\u002Fpresidential-actions\u002Fexecutive-order-maintaining-american-leadership-artificial-intelligence\u002F) - 美国总统发布的正式指令，旨在保持美国在人工智能领域的领导地位。\n* [2018年公平信用报告法](https:\u002F\u002Fwww.ftc.gov\u002Fenforcement\u002Fstatutes\u002Ffair-credit-reporting-act) - 《公平信用报告法》是一项联邦法律，用于规范消费者信用信息的收集及其信用报告的访问权限。\n* [格雷姆—里奇—比利法案（适用于金融机构）](https:\u002F\u002Fwww.ftc.gov\u002Ftips-advice\u002Fbusiness-center\u002Fprivacy-and-security\u002Fgramm-leach-bliley-act) - 格雷姆—里奇—比利法案要求金融机构（即向消费者提供贷款、财务或投资建议、保险等金融产品或服务的公司）向客户说明其信息共享做法，并采取措施保护敏感数据。\n* [1996年健康保险可携性和责任法案](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fsecurity\u002Flaws-regulations\u002Findex.html) - HIPAA要求美国卫生与公众服务部（HHS）部长制定保护特定健康信息隐私和安全的法规，随后HHS发布了所谓的HIPAA[隐私规则](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fprivacy\u002Findex.html)和HIPAA[安全规则](https:\u002F\u002Fwww.hhs.gov\u002Fhipaa\u002Ffor-professionals\u002Fsecurity\u002Findex.html)。\n* [1974年隐私法](https:\u002F\u002Fwww.justice.gov\u002Fopcl\u002Fprivacy-act-1974) - 1974年隐私法确立了一套公平信息实践准则，用于规范联邦机构在其记录系统中收集、维护、使用和传播有关个人的信息。\n* 
[1980年隐私保护法](https:\u002F\u002Fepic.org\u002Fprivacy\u002Fppa\u002F) - 1980年隐私保护法保护新闻工作者免于在作品公开发表前被执法机构要求交出任何工作成果和文件资料，包括消息来源。\n* [白宫关于人工智能的行政命令](https:\u002F\u002Fwww.whitehouse.gov\u002Fbriefing-room\u002Fpresidential-actions\u002F2023\u002F10\u002F30\u002Fexecutive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence\u002F) - 美国关于安全、可靠且值得信赖的人工智能开发与使用的行政命令。\n\n# 高级别框架与原则\n\n* [负责任机器学习的8项原则（AI与机器学习）](https:\u002F\u002Fethical.institute\u002Fprinciples.html) - 负责任人工智能与机器学习研究所制定了8项负责任机器学习的原则，供设计、构建和运营机器学习系统的个人及团队采纳。\n* [新西兰算法宪章](https:\u002F\u002Fdata.govt.nz\u002Fuse-data\u002Fdata-ethics\u002Fgovernment-algorithm-transparency-and-accountability\u002Falgorithm-charter) - 新西兰算法宪章是一项不断发展的文件，需根据新兴技术的变化作出响应，同时也要适合政府机构的实际需求。\n* [指南评估——伦理的伦理](https:\u002F\u002Farxiv.org\u002Fftp\u002Farxiv\u002Fpapers\u002F1903\u002F1903.03425.pdf) - 一篇分析多种伦理原则的研究论文。\n* [计算机协会伦理与职业行为准则](https:\u002F\u002Fwww.acm.org\u002Fcode-of-ethics) - 该准则由计算机协会于1992年制定，并于2018年更新。其旨在激励和指导所有计算专业人士的道德行为，包括现任及未来的从业者、教师、学生、意见领袖，以及以有影响力的方式使用计算技术的任何人。此外，当发生违规行为时，该准则也可作为纠正措施的基础。准则中的各项原则以责任声明的形式呈现，基于公共利益始终是首要考虑这一理念。\n* [边界与同意参考框架](https:\u002F\u002Fshadow-hickory-d63.notion.site\u002FBoundary-Consent-Reference-Frameworks-2f99b44ae6578072992cf2725684bcf0?source=copy_link) - 仅供参考。定义了AI系统的非执行判断边界及同意状态控制机制。\n* [负责任与智能数据实践宣言](https:\u002F\u002Fwww.declaration.org.uk\u002F) - 由[曼彻斯特开放数据组织](https:\u002F\u002Fwww.opendatamanchester.org.uk\u002F)提出的关于数据最佳实践的共同愿景。\n* [欧盟委员会可信人工智能伦理指南](https:\u002F\u002Fdigital-strategy.ec.europa.eu\u002Fen\u002Flibrary\u002Fethics-guidelines-trustworthy-ai) - 可信人工智能伦理指南是由人工智能高级别专家组（AI HLEG）编制的文件。该独立专家组于2018年6月由欧盟委员会成立，作为当年早些时候宣布的人工智能战略的一部分。\n* [从理念到实践：公开可用的人工智能伦理工具、方法与研究的初步审视——将原则转化为实践](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.06876) - 英国Digital Catapult发表的一篇论文，旨在识别并阐明原则与其实际应用之间的差距。\n* [IEEE的伦理对齐设计](https:\u002F\u002Fethicsinaction.ieee.org\u002F) - 一项以人工智能和自主系统优先保障人类福祉的愿景，鼓励技术人员在创建自主和智能技术时优先考虑伦理因素。\n* 
[蒙特利尔人工智能伦理研究所2020年6月人工智能伦理状况报告](https:\u002F\u002Fmontrealethics.ai\u002Fthe-state-of-ai-ethics-report-june-2020\u002F) - 由[蒙特利尔人工智能伦理研究所](https:\u002F\u002Fmontrealethics.ai)整理的一份资源，收录了2020年3月至6月期间人工智能伦理领域内最相关的研究与报告。\n* [蒙特利尔关于负责任发展人工智能的宣言](https:\u002F\u002Fwww.montrealdeclaration-responsibleai.com\u002Fthe-declaration) - 由蒙特利尔大学发起制定的伦理原则与价值观，旨在促进人类及群体的根本利益。\n* [牛津大学关于人工智能治理的建议](https:\u002F\u002Fwww.fhi.ox.ac.uk\u002Fwp-content\u002Fuploads\u002FStandards_-FHI-Technical-Report.pdf) - 来自牛津大学人类未来研究所的一组建议，重点关注高效设计、开发和研究人工智能标准所需的基础设施与特性。\n* [普华永道负责任的人工智能](https:\u002F\u002Fwww.pwc.com\u002Fgx\u002Fen\u002Fissues\u002Fdata-and-analytics\u002Fartificial-intelligence\u002Fwhat-is-responsible-ai.html) - 普华永道编制了一份调查问卷及一系列原则，概括了他们在负责任人工智能方面所识别出的关键领域。\n* [关于人工智能伦理的建议](https:\u002F\u002Funesdoc.unesco.org\u002Fark:\u002F48223\u002Fpf0000381137) - 联合国教科文组织的这项建议是一个全面的国际框架，旨在引导人工智能技术的发展与应用，并确立了一系列符合促进和保护人权、人类尊严及环境可持续性的价值观。该建议于2021年11月在教科文组织大会上经193个会员国一致通过。更多信息请参阅教科文组织2023年发布的关键事实资料[此处](https:\u002F\u002Fwww.unesco.org\u002Fen\u002Farticles\u002Funescos-recommendation-ethics-artificial-intelligence-key-facts)。\n* [SHELET协议](https:\u002F\u002Fmordechaipotash.github.io\u002Fpoly-wiki-public\u002F) - 一种用于人类监督AI系统的结构化治理循环（感知→假设→选择→执行→评估→优化），强调通过迭代压缩而非僵化控制来实现主权。[来源](https:\u002F\u002Fgithub.com\u002Fmordechaipotash\u002Fpoly-wiki-public)\n* [新加坡个人数据保护局人工智能治理原则](https:\u002F\u002Fwww.pdpc.gov.sg\u002Fhelp-and-resources\u002F2020\u002F01\u002Fmodel-ai-governance-framework) - 新加坡政府的个人数据保护局制定了一套关于数据保护及人类参与自动化系统的指导原则，并附有一份报告，详细阐述了这些[指导原则及其动机](https:\u002F\u002Fwww.pdpc.gov.sg\u002F-\u002Fmedia\u002FFiles\u002FPDPC\u002FPDF-Files\u002FResource-for-Organisation\u002FAI\u002FPrimer-for-2nd-edition-of-AI-Gov-Framework.pdf?la=en)。\n* [技术和组织最佳实践](https:\u002F\u002Fwww.fbpml.org\u002Fthe-best-practices\u002Fthe-best-practices) - 
由[机器学习最佳实践基金会（FBPML）](https:\u002F\u002Fwww.fbpml.org\u002F)整理的一项资源，包含技术指南（如公平与非歧视、监控与维护、数据质量、产品可追溯性、可解释性）以及组织指南（如数据治理、产品管理、人力资源管理、合规与审计）。欢迎社区成员通过[FBPML维基](https:\u002F\u002Fwiki.fbpml.org\u002Fwiki\u002FMain_Page)贡献内容。\n* [多伦多宣言](https:\u002F\u002Fwww.accessnow.org\u002Fthe-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems\u002F) - 由Access Now发布的宣言，旨在保护机器学习系统中的平等与非歧视权利。\n* [英国政府数据伦理框架原则](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-framework\u002Fdata-ethics-framework) - 由数字、文化、媒体和体育部（DCMS）编制的一份资源，概述了数据伦理，并提出了一个包含7项原则的框架。\n* [理解人工智能伦理与安全](https:\u002F\u002Fwww.turing.ac.uk\u002Fresearch\u002Fpublications\u002Funderstanding-artificial-intelligence-ethics-and-safety) - 由[艾伦·图灵研究所](https:\u002F\u002Fwww.turing.ac.uk\u002F)的戴维·莱斯利撰写的指南，旨在为公共部门中人工智能系统的负责任设计与实施提供指导。\n\n# 行业标准倡议\n\n* [ACS职业行为准则 - PDF](https:\u002F\u002Fwww.acs.org.au\u002Fcontent\u002Fdam\u002Facs\u002Frules-and-regulations\u002FCode-of-Professional-Conduct_v2.1.pdf) - 澳大利亚信息与通信技术行业专业组织。\n* [计算机协会伦理与职业行为准则](https:\u002F\u002Fwww.acm.org\u002Fcode-of-ethics) - 该准则由计算机协会于1992年制定，并于2018年更新。其旨在激励和指导所有计算专业人士的道德行为，包括现任及未来的从业者、教师、学生、意见领袖，以及以有影响力的方式使用计算技术的任何人。此外，当发生违规行为时，该准则也可作为纠正措施的基础。准则中的原则以责任声明的形式呈现，基于公共利益始终是首要考虑的原则。\n* [IEEE人工智能与自主系统伦理考量全球倡议](https:\u002F\u002Fethicsinaction.ieee.org\u002F) - IEEE批准的标准项目，专门聚焦于伦理对齐设计原则，包含14项（P700X）标准，涵盖从数据收集到隐私保护、算法偏见等多个主题。\n* [ISO\u002FIEC的人工智能标准](https:\u002F\u002Fwww.iso.org\u002Fcommittee\u002F6794475\u002Fx\u002Fcatalogue\u002F) - ISO的人工智能标准倡议，其中包括一系列后续标准，覆盖大数据、人工智能术语、机器学习框架等领域。\n* [开放伦理透明协议](https:\u002F\u002Fopenethics.ai\u002Foetp\u002F) - 开放伦理透明协议（[OETP](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOETP)或简称“协议”）描述了涉及自动化决策（ADM）的产品自愿性伦理信息披露的创建与交换机制，这些产品包括人工智能驱动的服务、机器人流程自动化（RPA）工具和机器人等。该协议旨在提高具有完全或部分自动化功能的数字产品的构建与部署过程的透明度。\n\n# 交互式实用工具\n\n* [Aequitas偏差与公平性审计工具包](http:\u002F\u002Faequitas.dssg.io\u002F) - 
偏差报告由Aequitas提供支持，这是一款面向机器学习开发者、分析师和政策制定者的开源偏差审计工具包，用于审计机器学习模型中的歧视与偏差，并围绕预测性风险评估工具的开发与部署做出明智且公平的决策。\n* [AI Atlas Nexus](https:\u002F\u002Fgithub.com\u002FIBM\u002Fai-atlas-nexus) - IBM研究院推出的一款开源Python工具包，利用知识图谱统一整合AI风险分类体系（NIST AI RMF、OWASP LLM Top 10、欧盟AI法案、MIT AI风险库），以实现风险识别、优先级排序及合规自动化。该工具被列入[OECD AI目录](https:\u002F\u002Foecd.ai\u002Fen\u002Fcatalogue\u002Ftools\u002Frisk-atlas-nexus)，并作为[AI联盟](https:\u002F\u002Fthealliance.ai\u002F)信任与安全倡议的一部分。\n* [AIR Blackbox](https:\u002F\u002Fgithub.com\u002Fairblackbox\u002Fair-blackbox-mcp) - 一款开源的欧盟AI法案合规扫描器及Python AI代理的信任层。它根据第9、10、11、12、14和15条法律条款检查代码，并针对LangChain、CrewAI、AutoGen、OpenAI和Anthropic SDK等框架提供特定集成。该工具包含HMAC-SHA256审计链、个人身份信息令牌化、同意门控机制以及提示注入检测功能。采用Apache 2.0许可证。\n* [Alibi](https:\u002F\u002Fgithub.com\u002FSeldonIO\u002Falibi) - 一个用于机器学习模型检查与解释的开源Python库。\n* [优秀机器学习生产工具列表](https:\u002F\u002Fgithub.com\u002Fethicalml\u002Fawesome-production-machine-learning) - 由伦理AI与机器学习研究所维护的，支持生产级机器学习系统设计、开发和运行的工具与框架列表。\n* [Cape Python](https:\u002F\u002Fgithub.com\u002Fcapeprivacy\u002Fcape-python) - 可轻松在Pandas和Spark中为数据科学与机器学习任务应用隐私增强技术。可与[Cape Core](https:\u002F\u002Fgithub.com\u002Fcapeprivacy\u002Fcape)配合使用，共同制定隐私政策，并将这些政策分发给跨团队和组织的数据项目。\n* [可解释性工具箱](https:\u002F\u002Fethical.institute\u002Fxai.html) - 伦理AI与机器学习研究所提出的一种传统数据科学流程的扩展版本，专注于算法偏见与可解释性，以确保能够缓解潜在的不良偏见风险。\n* [FAT Forensics](https:\u002F\u002Ffat-forensics.org\u002F) - 一款用于评估人工智能系统公平性、问责制和透明度的Python工具包。它基于SciPy和NumPy构建，并采用3-Clause BSD许可证（新BSD）发布。\n* [IBM AI可解释性360开源工具包](http:\u002F\u002Faix360.mybluemix.net\u002F) - 这是IBM提供的工具包，包含大量示例、研究论文和演示，展示了多种能够提供机器学习系统公平性洞察的算法。\n* [Linux基金会AI全景图](https:\u002F\u002Flandscape.lfai.foundation\u002F) - Linux基金会整理的官方AI领域工具清单，收录了许多维护良好且广泛使用的工具和框架。\n* [微软Fairlearn](https:\u002F\u002Ffairlearn.org\u002F) - 由微软开发的用于评估和提升机器学习产品公平性的开源工具包。\n* [微软Interpret ML](https:\u002F\u002Finterpret.ml\u002F) - 由微软开发的用于提升可解释性\u002F可理解性的开源工具包。\n* [开放伦理数据护照](https:\u002F\u002Fopenethics.ai\u002Foedp\u002F) - 
开放伦理数据护照（[OEDP](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOEDP)）旨在通过揭示训练数据集的收集、清洗、标注及用于AI模型训练的过程，来说明模型的来源。\n* [开放伦理标签：AI营养标签](https:\u002F\u002Fopenethics.ai\u002Flabel\u002F) - 开放伦理标签（[OEL](https:\u002F\u002Fgithub.com\u002FOpenEthicsAI\u002FOEL)）是针对包括AI系统在内的数字产品的“营养标签”。它为数字产品引入自我披露标准，使这些产品对消费者更加透明和安全。\n* [Orchard Kit](https:\u002F\u002Fgithub.com\u002FOrchardHarmonics\u002Forchard-kit) - 面向自主AI代理的一体化对齐、安全与认知架构工具包。具备膜安全防护、认识论标记、自我审计、代理发现、认知架构及集体认知等功能。无任何依赖项。\n* [行动起来：数字伦理](https:\u002F\u002Fwww.avanade.com\u002Fen-us\u002Fthinking\u002Fresearch-and-insights\u002Ftrendlines\u002Fdigital-ethics) - 来自Avanade\n\n# 在线课程与学习资源\n\n* [人工智能中的偏见与歧视](https:\u002F\u002Fwww.edx.org\u002Fcourse\u002Fbias-and-discrimination-in-ai) - 蒙特利尔大学和IVADO联合推出的免费课程（通过edX平台），探讨算法决策的歧视性影响及负责任的机器学习实践，包括识别和缓解偏见的机构与技术策略。\n* [数据科学伦理](https:\u002F\u002Fwww.coursera.org\u002Flearn\u002Fdata-science-ethics) - 密歇根大学Jagadish教授主讲的免费课程（通过Coursera平台），内容涵盖数据所有权、隐私与匿名性、数据有效性以及算法公平性。\n* [人工智能伦理入门](https:\u002F\u002Fwww.kaggle.com\u002Flearn\u002Fintro-to-ai-ethics) - Kaggle提供的免费课程，介绍人工智能伦理的基本概念及如何缓解相关问题。\n* [人工智能安全、伦理与社会导论](https:\u002F\u002Fwww.aisafetybook.com\u002F) - 由[人工智能安全中心](https:\u002F\u002Fwww.safe.ai\u002F)主任Dan Hendrycks编写，这本免费在线教材旨在为学生、从业者及其他希望更好地理解人工智能安全与伦理问题的人士提供易于理解的入门知识。除在线阅读外，本书还提供PDF版本[在此下载](https:\u002F\u002Fwww.aisafetybook.com\u002Fdownload)，并设有免费线上课程[在此参加](https:\u002F\u002Fwww.aisafetybook.com\u002Fvirtual-course)。\n* [实用数据伦理](https:\u002F\u002Fethics.fast.ai\u002F) - 旧金山大学数据研究所Rachel Thomas主讲的免费课程（通过fast.ai平台），内容涉及虚假信息、偏见与公平、伦理基础、隐私与监控、算法殖民主义等主题。\n* [Udacity安全与隐私保护的人工智能课程](https:\u002F\u002Fwww.udacity.com\u002Fcourse\u002Fsecure-and-private-ai--ud185) - Udacity提供的免费课程，介绍三种用于保护隐私的人工智能前沿技术：联邦学习、差分隐私和加密计算。\n\n# 流程与检查清单\n\n* [AI RFX采购框架](https:\u002F\u002Fethical.institute\u002Frfx.html) - 由伦理人工智能与机器学习研究所的跨学科团队（包括学者、行业从业者和技术专家）共同制定的采购框架，旨在帮助行业从业者评估机器学习系统的成熟度，从而更有效地选择合适的机器学习供应商。\n* [数据科学项目检查清单](http:\u002F\u002Fdeon.drivendata.org\u002F) - 
Deon是由DrivenData开发的命令行工具，可轻松将伦理检查清单添加到您的数据科学项目中。\n* [设计伦理型人工智能体验检查清单与协议](https:\u002F\u002Fresources.sei.cmu.edu\u002Flibrary\u002Fasset-view.cfm?assetid=636620) - 由卡内基梅隆大学软件工程研究所发布的文件，旨在指导多元化的团队在共同伦理价值观的基础上，开发可问责、低风险、尊重用户、安全可靠、诚实可信且易用的人工智能系统。\n* [Ethical OS工具包](https:\u002F\u002Fethicalos.org\u002F) - 该工具包深入探讨8个潜在风险领域，以评估技术团队可能面临的挑战；同时提供14个具体场景示例，以及7种面向未来的伦理应对策略。\n* [伦理画布](https:\u002F\u002Fwww.ethicscanvas.org\u002Findex.html) - 受传统商业画布启发的资源，采用类似便利贴的方式，以互动形式头脑风暴项目中可能遇到的伦理风险、机遇及解决方案。\n* [Kat Zhou的“设计伦理”资源](https:\u002F\u002Fwww.designethically.com\u002Ftoolkit) - 一套可供各团队组织的工作坊，用于识别挑战、评估现有风险，并针对潜在的伦理问题采取行动。\n* [机器学习保障](https:\u002F\u002Fmonitaur.ai\u002Fblog\u002Fmachine-learning-assurance\u002F) - 快速了解机器学习保障：记录、理解、验证和审计机器学习模型及其相关交易的过程。\n* [Markkula中心的工程\u002F设计实践伦理工具箱](https:\u002F\u002Fwww.scu.edu\u002Fethics-in-technology-practice\u002Fethical-toolkit\u002F) - 一套实用且通俗易懂的工具箱，包含七个组成部分，帮助从业者反思并判断其运营所基于的道德依据。\n* [微软人工智能公平性检查清单](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fproject\u002Fai-fairness-checklist\u002F)\n* [ODEP雇主检查清单：利用电子招聘筛选系统促进残疾人就业，包括人工智能技术的应用](https:\u002F\u002Fwww.peatworks.org\u002Fwp-content\u002Fuploads\u002F2020\u002F10\u002FEARN_PEAT_eRecruiting_Checklist.pdf) - 由美国劳工部残疾就业政策办公室（ODEP）资助的“残疾包容性雇主援助与资源网络”（EARN）和“就业与无障碍技术伙伴关系”（PEAT）合作编制的包容性人工智能检查清单，专为雇主设计。该清单为领导层、人力资源部门、平等就业机会管理者及采购人员提供了指导，帮助他们在审查用于招聘和候选人评估的人工智能工具时，确保其公平性和对残疾人的包容性。\n* [开放伦理成熟度模型（OEMM）](https:\u002F\u002Fopenethics.ai\u002Foemm\u002F) - 开放伦理成熟度模型是一个五级框架，引导组织迈向人工智能及自主系统的透明治理之路。\n* [旧金山市伦理与算法工具箱](https:\u002F\u002Fethicstoolkit.ai\u002F) - 面向使用算法的政府领导者和工作人员的风险管理框架，提供两步评估流程：算法评估流程及风险应对流程。\n* [英国政府数据伦理工作手册](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-workbook\u002Fdata-ethics-workbook) - 由数字、文化、媒体和体育部（DCMS）编写的资源，提供了一系列可供公共部门从业者提出的问题，这些问题围绕其[数据伦理框架原则](https:\u002F\u002Fwww.gov.uk\u002Fgovernment\u002Fpublications\u002Fdata-ethics-framework\u002Fdata-ethics-framework)逐一展开。\n* 
[美国NIST人工智能风险管理框架](https:\u002F\u002Fwww.nist.gov\u002Fitl\u002Fai-risk-management-framework) - 该框架旨在帮助人工智能系统的开发者、使用者和评估者更好地管理可能影响个人、组织、社会或环境的人工智能风险。\n* [世界经济论坛的采购指南](https:\u002F\u002Fwww.weforum.org\u002Fpress\u002F2019\u002F09\u002Fuk-government-first-to-pilot-ai-procurement-guidelines-co-designed-with-world-economic-forum\u002F) - 世界经济论坛制定了一套供各国政府安全可靠地采购机器学习相关系统的指南，并已在英国政府进行了试点。\n\n# 研究与行业通讯\n\n* [AI安全通讯](https:\u002F\u002Fnewsletter.safe.ai\u002F) - 由AI安全中心每周发布，面向非技术受众，提供关于AI研究、政策及其他领域的最新动态。\n* [Import AI](https:\u002F\u002Fjack-clark.net\u002F) - 由OpenAI的杰克·克拉克策划的通讯，精选最新且相关的AI研究，以及与AI技术研究交叉的社会议题。\n* [Thoughts in Between](https:\u002F\u002Fwww.getrevue.co\u002Fprofile\u002Fmattclifford) - 由Entrepreneur First首席执行官马特·克利福德策划的通讯，对地缘政治、深度科技初创企业、经济学等领域的话题进行精选和批判性分析。\n* [机器学习安全通讯](https:\u002F\u002Fnewsletter.mlsafety.org\u002F) - 同样由AI安全中心发布，不定期深入探讨AI技术研究中的关键成果。\n* [蒙特利尔AI伦理研究所每周AI伦理通讯](https:\u002F\u002Faiethics.substack.com) - 由[阿比谢克·古普塔](https:\u002F\u002Fatg-abhishek.github.io)及其在[蒙特利尔AI伦理研究所](https:\u002F\u002Fmontrealethics.ai)的团队策划的每周通讯，以通俗易懂的方式总结技术和学术论文，并对AI伦理领域的最新进展进行评论。\n* [机器学习工程师](https:\u002F\u002Fethical.institute\u002Fmle.html) - 由伦理AI与机器学习研究所策划的通讯，收录来自资深机器学习从业者的精选文章、教程和博客，涵盖机器学习可解释性、可复现性、模型评估、特征分析等方面的最佳实践、工具和技术见解。","# awesome-artificial-intelligence-regulation 快速上手指南\n\n`awesome-artificial-intelligence-regulation` 并非一个需要编译或运行的软件库，而是一个**curated list（精选列表）**。它汇集了全球范围内关于人工智能监管、原则、伦理框架、行业标准及法律法规的资源。\n\n对于中国开发者而言，该仓库的核心价值在于提供了一份结构化的“合规地图”，帮助快速定位中国及全球的 AI 政策动向。以下是基于该仓库内容的快速使用指南。\n\n## 环境准备\n\n本项目无需特定的操作系统或运行时环境（如 Python\u002FNode.js），仅需具备以下条件：\n*   **网络环境**：能够访问 GitHub (`github.com`)。\n*   **阅读工具**：现代浏览器（推荐 Chrome, Edge, Firefox）或 Markdown 阅读器。\n*   **前置依赖**：无。\n\n> **提示**：如果访问 GitHub 速度较慢，建议使用国内代码托管平台（如 Gitee）搜索是否有同步镜像，或配置本地 hosts 加速访问。\n\n## 安装步骤（获取资源）\n\n你可以通过以下两种方式获取该指南内容：\n\n### 方式一：在线直接浏览（推荐）\n直接在浏览器中打开仓库的 README 页面，利用目录跳转查看各国法规。\n*   **地址**: `https:\u002F\u002Fgithub.com\u002FEthicalML\u002Fawesome-artificial-intelligence-guidelines` (注：原仓库名可能指向此实际维护地址)\n\n### 
方式二：克隆到本地\n如果你希望离线阅读或作为团队知识库的一部分，可以使用 Git 克隆：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FEthicalML\u002Fawesome-artificial-intelligence-guidelines.git\ncd awesome-artificial-intelligence-guidelines\n```\n\n*(注：如果原链接 `awesome-artificial-intelligence-regulation` 无法克隆，请尝试上述实际维护仓库地址)*\n\n## 基本使用\n\n本仓库的使用核心是**检索与对照**。以下是针对中国开发者的最简使用路径：\n\n### 1. 查阅中国本土法规\n在文档的 **\"National Regulation by Economic Area\"** 表格中，点击 **China 🇨🇳** 锚点，或直接滚动至 **China** 章节。重点关注以下核心文件（已收录于列表中）：\n\n*   **生成式 AI 专项**：\n    *   [China's Interim Measures for the Management of Generative Artificial Intelligence Services](https:\u002F\u002Fwww.cac.gov.cn\u002F2023-07\u002F13\u002Fc_1690898327029107.htm) (生成式人工智能服务管理暂行办法) - *2023 年 8 月 15 日生效，开发大模型应用必读。*\n*   **数据安全与隐私**：\n    *   [China Internet Security Law](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FChina_Internet_Security_Law) (网络安全法)\n    *   [China's Personal Information Security Specification](https:\u002F\u002Fwww.newamerica.org\u002Fcybersecurity-initiative\u002Fdigichina\u002Fblog\u002Ftranslation-chinas-personal-information-security-specification\u002F) (个人信息安全规范)\n*   **伦理原则**：\n    *   [Beijing AI Principles](https:\u002F\u002Fwww.baai.ac.cn\u002Fblog\u002Fbeijing-ai-principles) (北京人工智能原则)\n\n### 2. 对比国际标准（出海参考）\n若你的产品涉及海外市场，可通过侧边栏或目录快速切换至对应区域：\n*   **欧盟**: 查看 `EU AI Act` (欧盟人工智能法案) 和 `GDPR`。\n*   **美国**: 查看 `United States of America` 章节下的各州及联邦指南。\n*   **其他**: 支持新加坡、巴西、印度等新兴市场的政策查询。\n\n### 3. 使用实用工具\n在 **\"Other Sections\"** 下，寻找 **🔨 Interactive & Practical Tools** 和 **🔏 Processes & Checklists** 部分。\n*   例如：迪拜的 **Ethical AI Toolkit** 提供了自评估工具，可借鉴其检查清单来构建内部的合规流程。\n\n### 4. 
订阅更新\nAI 法规更新迅速，建议通过文档底部“研究与行业通讯”部分的链接订阅相关通讯（如 **AI安全通讯**、**Machine Learning Engineer**），以获取最新的政策动态与工程实践解读。\n\n---\n*本指南旨在帮助开发者快速定位合规资源，具体法律条文请以官方发布的正式文本为准。*","一家面向欧洲市场的金融科技初创公司正在开发一款基于 AI 的信贷审批系统，团队急需确保产品符合欧盟及全球各地的合规要求。\n\n### 没有 awesome-artificial-intelligence-regulation 时\n- 法务与研发团队在海量分散的网络信息中盲目搜索，难以区分各国（如奥地利、巴西、中国）最新的 AI 法案草案与正式法规。\n- 因缺乏统一的映射视图，团队容易遗漏关键地区的特定伦理原则，导致产品设计初期就埋下合规隐患。\n- 手动整理行业标准与检查清单耗时数周，严重拖慢了产品上线进度，且无法保证信息的时效性。\n- 面对不同司法管辖区的冲突条款，团队缺乏权威参考依据，难以制定统一的内控策略。\n\n### 使用 awesome-artificial-intelligence-regulation 后\n- 团队直接通过该仓库按经济区域（如欧盟、美国、新加坡）快速定位到具体的法规文件（如欧盟 AI 法案、加拿大 AIDA），大幅缩短调研时间。\n- 利用其梳理的高层框架与行业倡议，团队迅速对齐了透明度与问责制等核心伦理指标，从源头规避设计风险。\n- 借助集成的交互式工具与检查清单，开发人员能在编码阶段即时自查，将原本数周的合规评估压缩至几天内完成。\n- 依托持续维护的多语言资源库，团队能实时追踪全球政策动态，灵活调整跨国业务的合规策略以应对监管变化。\n\nawesome-artificial-intelligence-regulation 将碎片化的全球 AI 治理生态转化为清晰的导航图，帮助企业在复杂的监管环境中高效构建负责任的 AI 系统。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FEthicalML_awesome-artificial-intelligence-regulation_59e2c5d6.png","EthicalML","The Institute for Ethical Machine Learning","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FEthicalML_2beff565.jpg","The Institute for Ethical Machine Learning is a think-tank that brings together technology leaders, policymakers & academics to develop standards for ML.",null,"a@ethical.institute","http:\u002F\u002Fethical.institute","https:\u002F\u002Fgithub.com\u002FEthicalML",1425,190,"2026-04-18T06:17:35","MIT",1,"","未说明",{"notes":30,"python":28,"dependencies":31},"该工具是一个 curated list（精选列表）类型的文档仓库，主要收集了全球各国关于人工智能的法规、原则、指南和标准链接。它不包含可执行的代码、模型或软件程序，因此无需特定的操作系统、GPU、内存、Python 版本或依赖库即可使用。用户只需通过浏览器阅读 Markdown 
文档或访问列出的外部链接即可。",[],[33,34,35,36],"图像","Agent","数据工具","开发框架",[38,39,40,41,42,43,44,45,46,47,48,49,50,51,52],"ethics-frameworks","machine-learning","data-protection","principles","regulation","institute-for-ethical-ai","ethical-ai","ai-ethics","data-ethics","privacy","guidelines","ai","ai-guidelines","ai-policy","machine-learning-guidelines",2,"ready","2026-03-27T02:49:30.150509","2026-04-18T22:35:17.792128",[],[],[60,69,77,86,95,103],{"id":61,"name":62,"github_repo":63,"description_zh":64,"stars":65,"difficulty_score":66,"last_commit_at":67,"category_tags":68,"status":54},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[34,36,33,35],{"id":70,"name":71,"github_repo":72,"description_zh":73,"stars":74,"difficulty_score":66,"last_commit_at":75,"category_tags":76,"status":54},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 
艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[36,33,34],{"id":78,"name":79,"github_repo":80,"description_zh":81,"stars":82,"difficulty_score":53,"last_commit_at":83,"category_tags":84,"status":54},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",160015,"2026-04-18T11:30:52",[36,34,85],"语言模型",{"id":87,"name":88,"github_repo":89,"description_zh":90,"stars":91,"difficulty_score":26,"last_commit_at":92,"category_tags":93,"status":54},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,"2026-04-16T14:50:03",[34,94],"插件",{"id":96,"name":97,"github_repo":98,"description_zh":99,"stars":100,"difficulty_score":53,"last_commit_at":101,"category_tags":102,"status":54},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 
全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",109154,"2026-04-18T11:18:24",[36,33,34],{"id":104,"name":105,"github_repo":106,"description_zh":107,"stars":108,"difficulty_score":53,"last_commit_at":109,"category_tags":110,"status":54},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[94,34,33,36]]