Technical Standards in Artificial Intelligence

In this module, we introduce technical standards in Artificial Intelligence, explain how they are developed and how they relate to diplomacy, and outline the differences between standards, ethics, principles and legislation in AI.

Standards often function ‘under the radar’, and technical standards-negotiation processes can be inaccessible and exclusive. But these standards are fundamental to all forms of governance, ranging from international rules and principles to domestic legislation and regulation.

Video: ‘Artificial Intelligence in Tech Diplomacy’ by Bart Hogeveen


Why do technical standards matter for AI governance?

Technical standards are frequently described as ‘technical’ and ‘apolitical’. However, it’s important to remember that standards are constructed by people and therefore inevitably reflect the beliefs and values of the people, organisations, countries and cultures debating and negotiating their final form. Equally, standards are influenced by the current state of technology and the contemporary architecture of global governance.

As the EU’s AI Act (2024) comes into force, the US President’s executive order on ‘Safe, secure and trustworthy AI’ (2023) is implemented, and China enforces its basic safety requirements (2024), the next phase of AI governance will increasingly focus on technical standards.

That’s why diplomats, regulators, industry and civil society are currently so focused on ‘getting the appropriate standards’, ‘getting those standards right’ and ‘setting them at the right bar’.

How do technical standards differ from norms, principles and regulation?

While government and industry leaders share common ground in addressing global concerns about the misuse of AI, there is no agreement about the best way of organising global governance.

No single treaty, convention or declaration to govern AI is in sight. Instead, it’s more likely that a patchwork of agreed and accepted principles and standards will emerge among a large group of governments and non-government organisations.

This patchwork will include various mechanisms of governance, such as:

• International rules and norms: political statements or intergovernmental consensus texts that articulate what states and other entities should and shouldn’t do. They aren’t legally binding and can’t be enforced, but they have discursive influence and can shape international agendas.

• Principles and ethical guidelines: self-defined, non-binding statements of intent by governments, industry, or both, through which they self-regulate. This instrument offers the flexibility to keep pace with changes in technology and societal demands, and to accommodate the challenges of AI technologies, where governments have neither a monopoly on systems operating within their borders nor the means to inspect or audit actual AI systems.

• Strategies, policies and road maps: national-level documents that set out a time-bound mission and vision. Governments, industry sectors and companies adopt strategies and road maps to articulate their principles, objectives, priorities, concerns, opportunities, resources and so on.

• Legislation: one of governments’ most powerful governance tools. It’s a means to define specific norms or standards and to enforce compliance with them through civil or criminal penalties for violations. However, legislative and regulatory action in the fast-moving technology sphere comes with specific challenges. Legislation takes time to negotiate and pass through parliaments (years, rather than months); must be sufficiently well drafted to be both implementable and enforceable; and must take unknown future technological developments into account.

• Use-cases and good practice: good policy is based on evidence- and data-driven research that diagnoses the core problem, and on designing and testing relevant, effective policy or regulatory interventions. In the pre-standardisation phase of a new technology, when the developers or owners of the technology still need to define standardisation requirements and build a community of support, most effort goes into documenting use-cases and good practices that will then form the basis for future rules, norms and standards.

• Technical standards: standards address issues related to ‘the technical core of AI’; they specify objective and verifiable product or process requirements, with due deference to technical experts from industry and academia. As with some other governance instruments, technical standards tend to be used as a form of ‘self-regulation’ or co-regulation, since they rely on voluntary take-up. However, technical standards are also frequently embedded in regulations, international trade agreements and government procurement contracts.

Why should governments be interested in technical standards?

AI governance is a live issue.

Governments across the world will have to determine how to position themselves in the strategic competition between the US and China, as both powers want to maintain or acquire a technological edge in AI technologies.

Many concurrent standards-related processes are underway. The bulk involve preliminary (pre-standardisation) work that focuses on establishing an evidence base of good practices. The practical development of actual international standards on AI still seems to be at a very early stage, but overall the US and China appear to be leading the pack, followed by Europe. This is a consequence of the size, scope and resources of their national standardisation communities, as well as their indigenous AI industrial bases.

The rest of the world, including the Indo-Pacific, is largely playing catch-up in most AI standards initiatives. Not being adequately represented is a strategic risk for countries in this region. After all, being part of the conversations and negotiations is everything, since ‘if you’re not at the table, you’re at risk of being on the menu.’

Where are technical standards for AI being developed?

International Organization for Standardization / International Electrotechnical Commission.

Sub-committee 42 of Joint Technical Committee 1 (JTC1/SC42), created in 2017, is the group responsible for standardisation in the field of AI within the ISO and IEC. Participants in ISO/IEC groups operate under mandates from their national standards bodies. Since 2017, the work of the subcommittee has been coordinated and chaired by the American National Standards Institute (ANSI). In 2022 and 2023, JTC1/SC42 developed and updated some 20 AI-related standards, making it one of the most productive groups.

IEEE Standards Association.

The IEEE Standards Association is an industry-led group. Its committee on AI was established in February 2021 and looks at standards that ‘enable the governance and practice of artificial intelligence related to computational approaches to machine learning, algorithms, and related data usage’.

The IEEE’s work on standards tends to focus more on compatibility and performance within specific sectors, whereas ISO and ITU work tends to focus on harmonisation among national standards-setters.

As of April 2024, the committee had 21 standards in draft and had completed three. Participants in the IEEE Standards Association are paying members, drawn predominantly (96%) from industry and academia.

International Telecommunication Union.

The ITU’s work on AI has been conducted in focus groups, which are ad hoc groups that respond to an acute perceived need but don’t have a standardisation mandate from the member states. That means that their work isn’t part of the official agenda and primarily involves ‘pre-standardisation’ activities: baseline research to assess the need, requirements and options for potential future standardisation.

The work of focus groups is self-organised and self-funded by the initiating member states, although they operate under the authority of a parent study group. Study groups are established by consensus by the World Telecommunication Standardisation Assembly, which is the governing body of member countries. At the time of writing (April 2024), the ITU runs three focus groups that deal with AI: AI for digital agriculture (from 2021), AI for natural disaster management (from 2020) and AI for health (2018–2023, merged into the Global Initiative on AI in Health). Two completed AI-related focus groups dealt with the environmental efficiency of AI (2019–2022) and AI for autonomous and assisted driving (2019–2022).

Are there other technical-expert-driven initiatives?

While not intended to develop technical standards per se, the Global Partnership on AI (GPAI) and the UN Advisory Body on AI are two expert-focused initiatives, outside of current conventional governance mechanisms, that may affect the future direction of AI governance and standards-setting for AI.

Global Partnership on AI.

Formed out of a Canada–France joint initiative in 2018, the GPAI was established in 2020 to ‘bridge the gap between theory and practice on AI’. It’s made up of member countries that bring together leading experts from industry, civil society, governments and academia.

The GPAI was initially conceived as an AI version of the Intergovernmental Panel on Climate Change; that is, an entity that would provide scientific, technical and socio-economic evidence and analyses of impacts and future risks. Therefore, the group didn’t intend to touch on AI governance or the development of norms and rules.

By April 2024, 29 countries had signed up to the GPAI, and about 120 experts had joined by either nomination or invitation.

UN Secretary-General’s AI Advisory Body.

In October 2023, UN Secretary-General António Guterres established the AI Advisory Body, a group of experts in the governance of AI or domains of AI application, drawn from government, industry, civil society and academia. The purpose of the body is to make preliminary recommendations in three areas: the international governance of AI; a shared understanding of risks and challenges; and key opportunities and enablers to leverage AI to accelerate delivery of the UN Sustainable Development Goals.

The recommendations will feed into the Global Digital Compact (GDC) that the Secretary-General will present during the Summit of the Future in late 2024. One goal of the GDC is to determine the role of the UN in the international governance of emerging technologies, including AI. A first interim report of the Advisory Body and a draft text of the GDC were circulated in April 2024.
