
Navigating the EU AI Act

By Hannah McCarthy, Head of Legal at Endava

On 1 August 2024, the European Union’s (EU) pioneering Artificial Intelligence (AI) Act finally came into force, marking a significant milestone in the regulation of AI.

The Act is a comprehensive legislative framework, designed to regulate AI applications and ensure their alignment with EU values and fundamental rights. The Act categorises AI systems based on their risk levels, ranging from minimal to unacceptable risk, and imposes corresponding obligations to ensure transparency, safety, and accountability, among other requirements. This legislation aims to foster innovation while protecting individuals and maintaining trust in AI technologies.

For many, this development is an essential step towards ensuring the responsible and ethical use of AI technologies. Only time will tell whether the regulation succeeds in cultivating a culture of compliance and innovation.

Global reach

As well as setting a precedent for other regions to follow, the Act will affect the global community through its extra-territorial effect. The legislation applies not only to AI systems developed and used within the EU, but also to those offered to EU customers or affecting EU citizens, regardless of where the providers are located. AI developers and providers outside of the EU must also adhere to these regulations if they wish to operate within the European market. Much like other pioneering EU regulations, the AI Act has the potential to set a global standard for compliance.

Key provisions

To avoid over-legislating and curtailing innovation, the AI Act classifies systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Systems posing unacceptable risks are prohibited, while high-risk systems must meet strict requirements.

For example, systems used in critical infrastructure, insurance, banking and healthcare could be considered high-risk. As such, they must comply with risk management, data governance, transparency and human oversight obligations, among others.

To avoid blurring the line between human and machine interaction, the Act has been designed with transparency in mind. Clear disclosures must be in place when individuals are interacting with AI, and AI’s decision-making process should be easily understandable.

The importance of data governance and human oversight is clear throughout the regulation. Robust data governance measures are mandated to ensure the quality and integrity of data used by AI systems, protecting against biases and inaccuracies. The Act also emphasises the importance of human oversight, requiring AI systems to remain under human control to ensure ethical decision-making and accountability.

Implications for the future

The enactment of the EU AI Act is a landmark moment that will shape the future of AI development and deployment in Europe and beyond. The staggered timeline for compliance has now been cemented, with deadlines coming into effect over the next few years.

The first compliance deadline, concerning AI systems classified as posing an unacceptable risk, takes effect from 2 February 2025. As technology companies align their development processes with these requirements, it is important that they work towards a trustworthy AI ecosystem.
