A constructive approach to AI

The recent explosion of artificial-intelligence (AI) capability has led some to dread what’s next. It is also a hot topic for international standards bodies, which are working to ensure that standards keep pace with, and are integrated into, these developments.

Sturdy standards

The international standards organisations ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) speak of the future more calmly – and constructively. There is nothing sensational in their approach. They won’t be making any headlines accompanied by images of the Terminator. But as in all things, these standards organisations – and the international standards community – want to chart a path that brings consistency, foresight and good outcomes for all involved.

Nor is this some recent kneejerk reaction. The joint ISO and IEC technical committee overseeing the work (ISO/IEC JTC 1/SC 42 Artificial intelligence) was formed in 2017 and has produced twenty publications already. Notable among these are two standards that act as central cogs for the rest. Both came out last year. More will follow.

Horizontal standards

Standards bodies often talk about horizontal and vertical standards. A horizontal standard covers general subject matter that spans many applications, fixing concepts and terminology so that standards for specific subjects and topics – the vertical instances – can build on it. The complete landscape of a field is viewed horizontally so that each vertical instance is consistent in terminology and framework with the others.

Two such horizontal standards are:

  • ISO/IEC 22989:2022 Information technology – Artificial intelligence – Artificial intelligence concepts and terminology
  • ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

The tone is set in the introduction of the latter standard. ‘By establishing a common terminology and a common set of concepts for such systems, this document provides a basis for the clear explanation of the systems and various considerations that apply to their engineering and to their use.’

The point of this wide-reaching committee is well summarised in a recent IEC blog, Generative AI for genetics?:

‘SC 42 develops international standards for artificial intelligence. Its unique holistic approach considers the entire AI ecosystem, by looking at technology capability and non-technical requirements, such as business and regulatory and policy requirements, application domain needs, and ethical and societal concerns.’

Planning the future

With large language models (LLMs) like ChatGPT pushing the cutting edge of generative AI into our homes, there is a realisation that AI is going to have a radical effect on many industries and professions – perhaps all of them. The goal of the standards programme in this area is to give those planning a role for AI in any field the tools to make sensible decisions and to institute reliable policies to guide their work. Good examples are standards and technical reports covering the following (and more):

  • guidance for risk management for AI
  • bias in AI systems and aided decision making
  • assessing trustworthiness of AI
  • societal and ethical concerns around AI
  • systems and software quality requirements and assessment for AI systems

Recommended reading

ISO and IEC continue to publish many interesting articles on AI matters. The following are recommended:

Ready Or Not, Here Comes AI – ISO Annual Meeting 2023 [video of a panel from the 2023 ISO Annual Meeting]

Forging a positive AI mindset: Dispelling the fear and embracing the potential of artificial intelligence

Artificial intelligence experts win prestigious ISO award: This year’s LDE Award celebrates a trailblazing committee laying the groundwork for the future of AI

Artificial intelligence: Rewards, risks and regulation

Artificial intelligence: Enhancing the trustworthiness of neural networks