
The ethics of artificial intelligence: who is responsible when things go wrong?

By Roy Chege of New World Tech.

By now, it’s not hard to imagine a world where robots are our companions, self-driving cars are the norm, and artificial intelligence runs everything from our homes to our healthcare. It sounds like a dream come true, right? As a recent graduate in AI and a consultant for a leading tech company, I can attest to the amazing potential of AI technology. During the pandemic, I built an algorithm that used AI to predict future Covid outbreaks – and it peaked at 98% accuracy. I live and breathe AI and firmly believe in its ability to transform our world for the better.

However, there is a darker side to this futuristic world – a world where machines make life-and-death decisions, and corporations have unprecedented control over our lives. In this article, we’ll explore some of the ethics of artificial intelligence and delve into the question of who is responsible when things go wrong.

AI systems are only as good as the data they’re trained on, and biases within that data can lead to unfair and discriminatory outcomes. For example, Amazon’s AI recruiting tool was found to be biased against women, while software used by US law enforcement agencies to predict future criminal behaviour was biased against African American defendants. These cases highlight the intertwined issues of gender and race bias in AI systems. AI developers must use diverse and representative training data to mitigate bias, and regularly audit AI systems to ensure they are not perpetuating existing inequities.
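What might such an audit look like in practice? One common starting point is the “four-fifths rule”: compare the rate at which a model selects candidates from different groups, and flag the system for investigation if the lower rate falls below 80% of the higher one. The sketch below illustrates the idea with entirely hypothetical data – real audits use real model outputs and far larger samples.

```python
# A minimal bias-audit sketch using the "four-fifths rule": compare
# selection rates across two groups; a ratio below 0.8 is a common
# red flag for adverse impact. All data here is hypothetical.

def selection_rate(outcomes):
    """Fraction of positive (selected) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions (1 = shortlisted) for two applicant groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # selection rate: 0.7
group_b = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # selection rate: 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 = 0.43
if ratio < 0.8:
    print("Potential adverse impact: review training data and features.")
```

A metric like this is only a screening tool – a low ratio doesn’t prove discrimination, and a high one doesn’t rule it out – but running such checks routinely is exactly the kind of audit discipline the paragraph above calls for.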

Another ethical concern is the development and deployment of autonomous weapons. These weapons can make decisions and take actions without human intervention, raising questions about accountability and unintended consequences. If an autonomous weapon causes harm to innocent civilians, who should be held responsible? The military commander who deployed the weapon? The government authority that authorised its use? The developers who created it? Addressing these questions is crucial for aligning the development and deployment of autonomous weapons with our moral and ethical principles.

The responsibility of AI developers and users is also an important consideration. When harm is caused, society expects those responsible to be held liable, but these questions become increasingly complex with autonomous systems. If a self-driving car causes a fatal accident, who should be held accountable – the manufacturer, the owner of the vehicle, or the AI system itself? Furthermore, when an autonomous car must choose between harming a pedestrian and driving off a cliff, harming its passengers, who decides whose lives the car prioritises? And how is that choice codified across the laws of different countries?

To address these and other ethical concerns surrounding AI, we must establish an ethical framework. This framework should include AI ethics boards, which would be responsible for setting ethical standards for AI development and use. Additionally, regulations should be put in place to ensure AI systems are transparent, accountable, and responsible. This could include mandatory reporting of AI incidents, regular audits of AI systems, and the development of AI-specific laws.

As AI continues to evolve and become an increasingly important part of our daily lives, it’s crucial that we consider the ethical and moral implications of its development and deployment. The responsibilities of AI creators and users must be addressed to ensure that AI aligns with our moral and ethical principles to build the future we want and to shield against bad actors. Developing AI ethics boards and creating AI regulations are essential steps towards establishing a framework for responsible and ethical AI. However, this cannot be achieved without collaboration between AI developers, users, policymakers, and society. By addressing these issues now, we can safeguard a future where AI is used for the betterment of humanity rather than its detriment.

Don’t miss the opportunity to join me and others at “The Good, Bad, and Ugly of AI” breakfast event series on Thursday 25th May, from 08:00–12:00 at Soho House White City, London. This event, sponsored by New World Tech, will bring together experts from various industries to explore the impact of AI on our world. We’ll delve into the Good, the Bad, and the Ugly sides of AI and discuss its applications, its ethical concerns, and the evolving workforce landscape. Places are limited, so please book your ticket as soon as possible – and enjoy a delicious breakfast alongside scintillating conversation and debate.

This event is the result of a collective effort by three highly skilled women spanning three decades of work in technology and creativity: Katie Bell at Aligned Studios, Nicole Yershon at The NY Collective, and Emma Jackson at The 5Gs, in collaboration with New World Tech.
