By Nick Stringer, a global technology, public policy, and regulatory affairs adviser. His extensive experience includes serving as the former Director of Regulatory Affairs at the UK Internet Advertising Bureau (IAB UK).
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” — Stephen Hawking, physicist
2024, a year marked by elections and global division, is drawing to a close. While national politics has dominated headlines, the lack of international cooperation on pressing global issues is deeply concerning. As the UK grapples with the aftermath of devastating floods, climate change immediately comes to mind. However, a less visible but equally perilous issue is the absence of global tech leadership.
As we enter the age of Generative AI, a race for technological supremacy has ensued among global superpowers. Many countries are developing their own AI models and strategies, yet there’s a dangerous void in international regulation. The lack of a unified approach to AI governance risks leaving us unprepared for the potential consequences.
This latest ByteWise Insights article highlights the grave dangers of a divided regulatory world. It calls for global leadership to foster innovation while putting in place safeguards that protect society from criminals, predators, phishers, scammers, and fraudsters (P.S. I just love Daisy, O2’s new AI Head of Scammer Relations!).
I sounded the alarm on this issue early in 2024, proposing some potential solutions to discuss and debate. But the fractured international community offers little hope. Ignoring this responsibility could lead to a future where short-term economic gains outweigh the long-term well-being of citizens, leaving policymakers perpetually chasing a moving target.
“To be human in a world of AI in 2024 is to be sat right on a hinge of history”
I am a strong proponent of technological advancement, but I believe that such progress should be accompanied by appropriate regulation to enable growth and innovation whilst reducing the risks. This is particularly the case for generative AI. We can all rattle off a list of AI’s potential benefits, yet most people still haven’t grasped the full extent of its revolutionary power. As with any technological leap, there will be winners and losers, and society will need to uphold the principles and values it cherishes (e.g., quality news and journalism).
One particularly insightful perspective that caught my eye recently was from Richard Sargeant, Managing Director and Partner at Boston Consulting Group. In a recent speech to UK parliamentarians, he aimed to highlight the human element of AI and its potential to transform our lives, work, and the world we live in, concluding that it could be a catalyst for economic growth, wage increases, and a more peaceful future.
His vision is inspiring, as are those of many others. AI’s role in the delivery of essential public services will improve many people’s lives. For example, the UK Government is allocating £32m to fund 98 AI projects aimed at improving public services, such as speeding up NHS prescriptions and reducing train delays.
AI will also play a significant role in tackling online safety challenges, from testing an organisation’s compliance with regulatory obligations to delivering innovative solutions for requirements such as age assurance. But we must also heed the warnings, like those from Professor Hawking, and ask whether the potential risks are being adequately managed.
‘Tis the Season for AI Regulation
Policymakers worldwide are actively engaged in developing comprehensive frameworks to address the potential risks associated with AI. While existing legal models provide a foundational basis, the advancement of AI is leading to the formulation of specific regulations.
Here’s a quick glance at the current state of play in some key markets:
- The European Union (EU) leads the way with its ambitious AI Act, categorising AI systems by risk level and imposing tailored requirements. This approach, though criticised for being over-restrictive and thwarting innovation, aims to balance growth with necessary restrictions. Nevertheless, as with the General Data Protection Regulation (GDPR), the law is providing a framework for others to emulate.
- The United States has taken a more fragmented approach, with various agencies regulating different aspects of AI. President Biden’s executive order signalled a push for a comprehensive framework, though the new political climate may influence its pace. More on this later.
- China’s AI regulations attempt to establish global standards but are likely to be met with scepticism in the ‘West’ due to the state control of information.
- The UK, eager to attract AI investment post-Brexit, is also actively engaged in AI regulation. The Online Safety Act addresses the potential harms of generative AI. Specific AI legislation was expected earlier in the year and is still likely in 2025. However, many companies operating in the UK will likely need to comply with the EU’s AI Act.
- Canada’s Artificial Intelligence and Data Act (AIDA), one of the earliest proposed AI laws, remains locked in the legislative process.
- Likewise, South Korea’s Artificial Intelligence Industry Promotion Act is yet to pass into law, although, in contrast to the EU’s AI Act, it appears to be based on a principle of technology adoption first and regulation later.
- Other countries like Singapore, Japan, and Australia have adopted guidelines and frameworks to promote ethical and responsible AI development but lack specific laws for now (although some Australian politicians are advocating them).
Is Regulatory Divergence the Inevitable Future?
The digital age has ushered in a new era of geopolitical competition, marked by a growing divergence in regulatory approaches. As nations strive to protect their national interests and spur economic growth, the trend toward ‘cyber-patriotism’ or ‘digital protectionism’ has intensified. This is particularly evident in the realm of AI, where countries are developing sovereign AI models to gain a competitive edge. The absence of a unified, global approach to AI regulation raises serious concerns.
The recent US presidential election has further exacerbated global geopolitical tensions. President-Elect Trump’s pro-innovation stance, favouring minimal AI regulation, stands in stark contrast to the EU’s more precautionary approach. Though its regulatory influence is not exclusive to AI, the US has a unique opportunity to lead the world in AI governance and regulation. However, the impending political climate in the US, coupled with a preoccupation with China’s AI advancements, may result in a global landscape where other nations independently develop their own solutions to balance safety, innovation, and competition.
A pessimistic interpretation would be that many governments around the world are deliberately sacrificing robust AI oversight for the sake of technological supremacy, disregarding the potential for unintended harm. Divergent regulatory frameworks further compound this problem.
Talk the talk, walk the walk…
The concerns raised by Stephen Hawking are echoed by other scientists and technologists worldwide. As our digital world continues to expand, the imperative for technological leadership and international cooperation on AI regulation becomes increasingly urgent. The UK’s AI Safety Summit in 2023, culminating in the Bletchley Declaration, set an important precedent for such collaboration. The South Korea summit in May 2024 furthered this momentum, and a planned summit in France in February 2025 aims to establish an “effective and inclusive international framework for AI governance.” There’s also great work being done by the WeProtect Global Alliance and the Internet Governance Forum (IGF).
The World Economic Forum (WEF) is calling for a unified approach. This is crucial, and it must translate into concrete, common principles and standards. A fragmented regulatory landscape will stifle innovation, exacerbate geopolitical tensions, and ultimately undermine the potential benefits of the digital age. We must walk the walk.