
BSI publishes ‘first’ standard for responsible AI application

The British Standards Institution (BSI), the national standards body of the UK, has unveiled a ‘first-of-its-kind’ international standard for the responsible use of artificial intelligence.

The framework (BS ISO/IEC 42001) is aimed at assisting businesses in the ‘safe, secure, and responsible’ use of AI, addressing factors such as non-transparent automatic decision-making, the utilisation of machine learning for system design, and continuous learning.

“AI is a transformational technology. For it to be a powerful force for good, trust is critical,” said Susan Taylor Martin, CEO at BSI. “The publication of the first international AI management system standard is an important step in empowering organizations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”

It lays out how to establish, implement, maintain, and continually improve an AI management system, providing requirements to facilitate context-based AI risk assessments, along with detail on risk treatments and controls for internal and external AI products and services.

Importantly, the framework is designed to encourage a ‘quality-centric culture’ that leads to the development of responsible AI-enabled products and services that benefit businesses and wider society.

“AI technologies are being widely used by organisations in the UK despite the lack of an established regulatory framework. While government considers how to regulate most effectively, people everywhere are calling for guidelines and guardrails to protect them. In this fast-moving space, BSI is pleased to announce publication of the latest international management standard for industry on the use of AI technologies, which is aimed at helping companies embed safe and responsible use of AI in their products and services,” said Scott Steedman, Director General, Standards at BSI.

“Medical diagnoses, self-driving cars and digital assistants are just a few examples of products that already benefit from AI. Consumers and industry need to be confident that in the race to develop these new technologies we are not embedding discrimination, safety blind spots or loss of privacy. The guidelines for business leaders in the new AI standard aim to balance innovation with best practice by focusing on the key risks, accountabilities and safeguards.”

A recent BSI poll of 10,000 adults across nine countries found that 62% of people wanted to see international guidelines to enable the safe use of AI. Almost two-fifths of adults already use AI every day at work, while 62% expect their industries to do so by 2030.
