European Declaration on AI
Principles
Status
Voluntary
Type
Outcome document
Year
2018
Stakeholders
Other
Participants
EU member states
A declaration by which signatories join forces and engage in a European approach to dealing with AI.
UN General Assembly, Resolution on safe, secure and trustworthy AI for sustainable development
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2024
Stakeholders
Economic
Participants
UN member states (193)
A UN General Assembly resolution that aims to bridge the artificial intelligence (AI) and other digital divides between and within countries and promote safe, secure and trustworthy AI systems to accelerate progress towards the full realization of the 2030 Agenda for Sustainable Development.
Council of Europe, Convention on AI, human rights, democracy and the rule of law
Rules & norms
Status
Binding
Type
Outcome document
Year
2024
Stakeholders
Social
Participants
Council of Europe member states and participating non-member states
The first international legally binding treaty aimed at ensuring respect for human rights, the rule of law and democracy, introducing legal standards for the use of AI systems.
AI Partnership for Defence
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2020-
Stakeholders
Military
Participants
US + Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, South Korea, Sweden, UK
Led by the US Department of Defense Joint AI Center (JAIC), the Partnership for Defense (PfD) is a recurring forum for like-minded defence partners to discuss their respective policies, approaches, challenges and solutions in adopting AI-enabled capabilities.
NATO DARB, responsible AI certification
Technical standards
Status
Voluntary
Type
Ongoing process
Year
2023-
Stakeholders
Military
Participants
NATO member countries
A user-friendly and responsible Artificial Intelligence (AI) certification standard to help industries and institutions across the Alliance make sure that new AI and data projects are in line with international law, as well as NATO’s norms and values, notably in terms of governability, traceability and reliability.
European Commission, AI Act
Legislation & regulation
Status
Binding
Type
Outcome document
Year
2024
Stakeholders
Economic
Participants
European Commission, EU member states.
The AI Act is a legal framework that provides AI developers and deployers with requirements and obligations regarding specific uses of AI.
European Commission, Coordinated Plan on AI (2021)
Strategy & policy
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Economic
Participants
European Commission, EU member states, Norway and Switzerland
The Plan is a joint commitment to accelerate investment in AI, implement AI strategies and programmes, and align AI policy to avoid fragmentation within Europe.
OECD/G20 Recommendation of the Council on Artificial Intelligence
Principles
Status
Voluntary
Type
Outcome document
Year
2019
Stakeholders
Other
Participants
46 countries (38 OECD member countries and 8 non-members)
Intergovernmental policy guidelines on AI to uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.
NATO AI Strategy
Strategy & policy
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Military
Participants
NATO member countries
The strategy aims to encourage the responsible development, adoption and use of AI for defence and security purposes in capability development and delivery, and to identify and safeguard against threats from the malicious use of AI by state and non-state actors.
European Commission White Paper on AI (2020)
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2020
Stakeholders
Economic
Participants
EU Member States
The White Paper aims to foster a common European approach to AI and offers a range of policy options. A White Paper can pave the way for future legislative instruments.
NATO Principles of Responsible Use
Principles
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Military
Participants
NATO member countries
The principles are based on existing ethical, legal and policy commitments under which NATO has historically operated and continues to operate: Lawfulness; Responsibility and Accountability; Explainability and Traceability; Reliability; Governability; and Bias Mitigation.
ITU, Focus Groups related to various AI fields of application
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2018-
Stakeholders
Social
Participants
UN member states
FG-AI4A explores the potential of emerging technologies, including AI, to support data acquisition and handling, improve modelling of agricultural and geospatial data, and provide effective communication to optimise agricultural production processes; it also examines key concepts and gaps in the current standardisation landscape in agriculture and documents best practices and barriers related to the use of AI. FG-AI4H is a partnership of the ITU and WHO to establish a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.
UNESCO Recommendation on the Ethics of AI
Principles
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Social
Participants
UN member states
Through 11 policy areas, the document highlights concrete policy actions to be undertaken for ethical development, deployment and use of AI.
UN Group of Government experts on Lethal Autonomous Weapons Systems
Rules & norms
Status
Voluntary
Type
Ongoing process
Year
2024-2025
Stakeholders
Military
Participants
States parties to the UN Convention on Certain Conventional Weapons (CCW)
An open-ended Group of Governmental Experts to design a normative and operational framework for lethal autonomous weapons systems.
UK and others, Bletchley Declaration / AI Safety Summit
Rules & norms
Status
Voluntary
Type
Ongoing process
Year
2023-
Stakeholders
Technological
Participants
Global AI community.
The first of the global AI Safety Summits, held in the UK in 2023, sought to facilitate consensus on approaches to frontier AI technologies and established a new track in global AI discussions. A second summit took place in May 2024 in Seoul.
US and others, Political Declaration on Responsible Military Use of AI and Autonomy
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2023
Stakeholders
Military
Participants
The US and 53 other governments.
A US proposal for a normative framework addressing the use of AI and autonomous capabilities in the military domain.
G7, Hiroshima Process
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2023
Stakeholders
Economic
Participants
Leaders of the G7
The Hiroshima AI Process Comprehensive Policy Framework is an international framework, comprising guiding principles and a code of conduct, aimed at promoting safe, secure and trustworthy advanced AI systems. It was agreed at the G7 Digital & Tech Ministers’ Meeting in December 2023 and endorsed by the G7 Leaders that year.
Quad Standards Coordination Group
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2021-
Stakeholders
Technological
Participants
Quad members: Australia, India, Japan and US
A Quad working group to coordinate approaches to technical standards and engagement in standards development organisations (SDOs).
Quad Principles on Critical and Emerging Technology Standards
Principles
Status
Voluntary
Type
Outcome document
Year
2023
Stakeholders
Technological
Participants
Quad members: Australia, India, Japan and US
The Quad countries commit to supporting industry-led, consensus-based, multi-stakeholder approaches to the development of technology standards.
Quad Principles on Technology Design, Development, Governance, and Use
Principles
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Technological
Participants
Quad members: Australia, India, Japan and US
The Quad countries affirm that the ways in which technology is designed, developed, governed, and used should be shaped by our shared democratic values and respect for universal human rights.
IBM, Principles for trust and transparency
Principles
Status
Voluntary
Type
Outcome document
Year
2018
Stakeholders
Technological
Participants
IBM
It describes IBM's core principles – grounded in commitments to Trust and Transparency – that guide its handling of client data and insights, and also its responsible development and deployment of new technologies, such as IBM Watson.
Tech Accord to combat deceptive use of AI in elections
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2024
Stakeholders
Technological
Participants
AI companies globally.
This accord sets expectations for how signatories will manage the risks arising from deceptive AI election content created through their publicly accessible, large-scale platforms or open foundation models, or distributed on their large-scale social or publishing platforms, in line with their own policies and practices as relevant to the commitments in the accord.
Microsoft, Responsible AI Standard v2
Use case & good practices
Status
Voluntary
Type
Outcome document
Year
2022
Stakeholders
Technological
Participants
Microsoft
Microsoft's internal product development requirements for responsible AI, with principles similar to those of the previous version.
Google, Responsible AI practices
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2018-
Stakeholders
Technological
Participants
Self-regulatory
Google's internal responsible AI practices guide with principles of fairness, interpretability, privacy, and safety.
Microsoft, Responsible AI Principles
Principles
Status
Voluntary
Type
Outcome document
Year
2018
Stakeholders
Technological
Participants
Microsoft
Microsoft's internal AI deployment guiding principles based on fairness, reliability & safety, privacy, inclusiveness, transparency and accountability.
Google, AI Principles
Principles
Status
Voluntary
Type
Outcome document
Year
2018
Stakeholders
Technological
Participants
Self-regulatory
Google's internal AI deployment guiding principles based on 7 objectives for AI applications.
IEEE, P7000, P2247 and P2802 series
Technical standards
Status
Voluntary
Type
Ongoing process
Year
2021-
Stakeholders
Global
Participants
Organisations that nurture, develop and advance global technologies
Voluntary standards to promote the development of autonomous and intelligent systems in a manner that considers human ethics and wellbeing rather than only technological feasibility.
Partnership on AI
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2016-
Stakeholders
Other
Participants
Academic, industry and non-profit organisations worldwide.
A non-profit partnership of academic, civil society, industry and media organisations to create solutions so that AI advances positive outcomes for people and society, such as tools, recommendations and other resources.
W3C, AI Knowledge Representation Community Group
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2018-
Stakeholders
Technological
Participants
A community group of 82 participants representing 27 organisations
A community group to explore the requirements, best practices and implementation options for AI knowledge representation within the World Wide Web Consortium (W3C).
IETF, framework for ANIMA (Autonomic Networking Integrated Model and Approach)
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2020-
Stakeholders
Technological
Participants
IETF community (internet network operators)
A working group that develops and maintains specifications and documentation for interoperable protocols and procedures for automated network management.
ITU, AI for Good Summit
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2017-
Stakeholders
Social
Participants
Global AI community.
A year-round platform where AI innovators and problem owners can engage to learn, build and connect, identifying practical AI solutions to advance the UN Sustainable Development Goals.
Future of Life, Asilomar AI Principles
Principles
Status
Voluntary
Type
Outcome document
Year
2017
Stakeholders
Other
Participants
Global AI community.
The outcome of a two-day conference of AI experts and practitioners convened by the Future of Life Institute, an NGO that aims to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.
Internet Governance Forum, Policy Network on AI
Principles
Status
Voluntary
Type
Ongoing process
Year
2023-
Stakeholders
Other
Participants
Global AI community.
PNAI brings stakeholders from across the world together to foster dialogue and contribute to the global AI policy discourse.
ETSI, Technical Committee on Securing AI
Technical standards
Status
Voluntary
Type
Ongoing process
Year
2023-
Stakeholders
Technological
Participants
850 organisations from 60+ countries.
The aim of TC SAI is to develop technical specifications that mitigate threats arising from the deployment of AI, as well as threats to AI systems from both other AI systems and conventional sources.
Centre for Humanitarian Dialogue, Code of conduct on artificial intelligence in military systems
Rules & norms
Status
Voluntary
Type
Outcome document
Year
2021
Stakeholders
Military
Participants
State actors
A draft identifying possible areas of agreement on principles and limitations for AI use in military systems.
US NIST, AI Risk Management Framework
Technical standards
Status
Voluntary
Type
Outcome document
Year
2023
Stakeholders
National
Participants
Individuals, organisations, and society associated with artificial intelligence
The framework helps organisations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities.
Global Partnership on AI (GPAI)
Good practices
Status
Voluntary
Type
Ongoing process
Year
2020-
Stakeholders
Technological
Participants
GPAI member countries
GPAI is a multi-stakeholder initiative which aims to bridge the gap between theory and practice on AI by supporting cutting-edge research and applied activities on AI-related priorities across four working groups: Responsible AI, Future of Work, Innovation and Commercialisation, and Data Governance.
CEN-CENELEC JTC 21
Technical standards
Status
Voluntary
Type
Ongoing process
Year
2021-
Stakeholders
Technological
Participants
National Standards Bodies of the EU member states
CEN-CENELEC JTC 21 identifies and adopts international standards that are available or under development, and produces standardisation deliverables that address European market and societal needs along with the values underpinning EU legislation, policies and principles.
ISO, JTC 1/SC42
Technical standards
Status
Voluntary
Type
Ongoing process
Year
2017-
Stakeholders
Technological
Participants
All ISO members
ISO/IEC JTC 1/SC 42 is the main group developing technical standards for AI.
China's Technical Committee 260, Basic safety requirements for generative AI services
Technical standards
Status
Binding
Type
Outcome document
Year
2024
Stakeholders
Technological
Participants
Chinese AI companies
China's standard for generative AI, establishing specific oversight processes that Chinese AI companies must adopt in regard to their model training data, model-generated content, and related matters.
Digital Governance Standards Institute (Canada), Automated Decision Systems
Technical standards
Status
Voluntary
Type
Outcome document
Year
2019
Stakeholders
Social
Participants
Canadian organisations
This standard helps organisations protect human rights and incorporate ethics in their design and use of automated decision systems, limited to AI using machine learning for automated decisions.
US Federal Trade Commission, Rule on Impersonation of Government and Business
Legislation & regulation
Status
Binding
Type
Outcome document
Year
2024
Stakeholders
Economic
Participants
US businesses
This rule gives the Federal Trade Commission stronger tools to combat and deter scammers who impersonate government agencies and businesses, enabling the Commission to file federal court cases.
The White House, Executive Order on the Safe, Secure and Trustworthy Development and Use of AI
Legislation & regulation
Status
Binding
Type
Outcome document
Year
2023
Stakeholders
Economic
Participants
US federal government, and US-based AI companies
The Executive Order establishes new standards for AI safety and security in line with the Defense Production Act, and provides the mandate for NIST to act as the central coordinating standards body.
China State Council, New Generation AI Development Plan
Strategy & policy
Status
Voluntary
Type
Outcome document
Year
2017
Stakeholders
Other
Participants
Chinese governments and business.
This document sets out a top-level design blueprint of China's approach to developing artificial intelligence (AI) technology and applications, setting broad goals up to 2030.
US Defence, Ethical Principles for AI
Principles
Status
Voluntary
Type
Outcome document
Year
2020
Stakeholders
Military
Participants
US Department of Defense.
These principles encompass five areas and guide DoD AI capabilities to be responsible, equitable, traceable, reliable, and governable. The principles are intended to prompt and promote continual engagement of AI ethics as a process distributed throughout the entire AI lifecycle both interactively and iteratively.
Singapore Government, AI Verify: an AI governance testing framework and toolkit
Technical standards
Status
Voluntary
Type
Outcome document
Year
2022
Stakeholders
Economic
Participants
Private sector organisations
Pilot for testing trustworthiness of AI systems deployed by companies in their products and services.
US NIST, Interagency Committee on Standards Policy, AI Standards Coordination Working Group
Use case & good practices
Status
Voluntary
Type
Ongoing process
Year
2021-
Stakeholders
National
Participants
US government
The Working Group coordinates government activities related to the development and use of AI standards.
Singapore Government, Compendium of Use Cases: Practical Illustrations of the Model AI Governance Framework
Use case & good practices
Status
Voluntary
Type
Outcome document
Year
2020
Stakeholders
Economic
Participants
Private sector organisations
A guide that outlines how organisations have implemented or aligned their AI-related practices with the Model AI Governance Framework.
Singapore Government, Model AI Governance Framework, Ed2
Principles
Status
Voluntary
Type
Outcome document
Year
2020
Stakeholders
Economic
Participants
Private sector organisations
Singapore government's voluntary guidance for private sector organisations regarding the responsible use of AI.
Australian Government, AI Ethics Framework (2019)
Principles
Status
Voluntary
Type
Outcome document
Year
2019
Stakeholders
Economic
Participants
Australian businesses and governments (federal, state and territory)
The framework contains principles and measures for businesses and governments to practice the highest ethical standards when designing, developing and implementing AI.
Standards Australia, An AI Standards roadmap
Strategy & policy
Status
Voluntary
Type
Outcome document
Year
2020
Stakeholders
National
Participants
Australian businesses and government agencies
Australia's framework for engaging in and shaping AI standards development internationally.