
I’ll be your mirror, reflect what you are, in case you don’t know

By Louise Hooper, Human Rights Lawyer

TL;DR: It’s not the AI; it’s us.

An AI is a machine trained by humans to do tasks defined by humans based on data created by humans. When we talk about ‘the AI’ being racist or ‘the AI’ being sexist, we really mean humans. 

AI offers extraordinary opportunities across diverse fields such as healthcare, education, entertainment, manufacturing, agriculture, climate and the environment. With these rapid advances and opportunities come both technical and human error, and a great deal of doom-mongering about the demise of the dominance of man and the existential threat to humankind.

I am less pessimistic. We can use AI as a mirror to identify, interrogate and address existing social bias and discriminatory patterns and ask more significant questions about what we want next for the world we live in. Whether we do that and how ‘responsible’ we are about that task remains to be seen. Below I outline three key suggestions for reducing the impact of human history on the future fairness of AI models:

Firstly: Critically analyse the outcomes of existing predictive tools.

Information is not neutral. Instead, it relies on data points and assumptions input or created by humans. Unless care is taken, there is a risk not so much that history repeats itself but that we rewind and get stuck in the past.

Conducting my own small experiments with ChatGPT in various guises, I have been dismayed to find that even the most basic questions tend to result in the erasure of women and people of the Global Majority.

Example 1: Prompt: ‘Create an AI reading list’. This returned 15 results, of which only one, ‘Weapons of Math Destruction’, was written by a woman (Cathy O’Neil).

Prompt: ‘Create a reading list of books by women’. This returned a list of 10 books. Nine were by women and one by a man, but a woman had written the foreword. 

Example 2: Prompt: ‘Who are the top 1% of thinkers?’ (regenerated four times). Out of a total of 31 names returned, two were women. All the thinkers were white and either American or European (mostly dead Greeks and Romans).
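These prompts were run in the ChatGPT web interface, but anyone wanting to repeat or extend the exercise can do so programmatically. The sketch below is a minimal illustration, assuming the openai Python package (v1.x) and an API key in the environment; the model name, prompt list and number of regenerations are placeholders, not a record of the experiment described above.

```python
# Minimal sketch for repeating prompt experiments via the API rather than
# the ChatGPT web interface. Assumes the openai Python package (v1.x) and an
# OPENAI_API_KEY environment variable; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Create an AI reading list",
    "Create a reading list of books by women",
    "Who are the top 1% of thinkers?",
]

def run_prompt(prompt: str, runs: int = 4) -> list[str]:
    """Regenerate the same prompt several times so responses can be compared."""
    responses = []
    for _ in range(runs):
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; use whichever model you are testing
            messages=[{"role": "user", "content": prompt}],
        )
        responses.append(completion.choices[0].message.content)
    return responses

if __name__ == "__main__":
    for prompt in PROMPTS:
        for i, text in enumerate(run_prompt(prompt), start=1):
            print(f"--- {prompt!r}, run {i} ---\n{text}\n")
```

Counting who appears in the returned lists (and who does not) still has to be done by a human reader; the point of regenerating each prompt several times is simply to check that a skewed list is a pattern rather than a one-off.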

The danger is not simply the absence of women and people of colour from the lists returned, but also that such lists reinforce the impression that these are the ‘top’ thinkers or the ‘best’ books, which in turn makes it less likely that work by women and the Global Majority will be found, seen and read.

Secondly: Listen to diverse voices.

Skewed outcomes are not news. For years, women, predominantly women from the Global Majority (Virginia Eubanks, Ruha Benjamin, Safiya Noble, Timnit Gebru, Wendy Hui Kyong Chun and Cathy O’Neil amongst others), have been identifying bias and discrimination in outcomes produced by artificial intelligence and arguing that more must be done to prevent new systems further entrenching discrimination in our societies. Perhaps if we had listened in the first place, stepped back to think harder about the issues raised and consulted a wider group of stakeholders, better outcomes would have been achieved, and some of the current state of panic and existential doom would have been lessened.

Thirdly: Conduct ethical and human rights risk assessments at key stages, including design, development, deployment and decommissioning.

A human-centred approach to reducing the social, environmental and existential risks posed by artificial intelligence means conducting ethical and human rights risk assessments at each of these stages. The assessments should be independent, cost-effective and simple to administer.
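As one way of making those checkpoints concrete, here is a minimal sketch of how an assessment record per lifecycle stage might be structured. The stage names come from the article; everything else (the field names and the example entry) is a hypothetical illustration, not a formal human rights impact assessment framework.

```python
# Illustrative sketch only: a lightweight record of ethical / human rights
# risk-assessment checkpoints across an AI system's lifecycle. Stage names
# follow the article; fields and example content are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    DECOMMISSIONING = "decommissioning"

@dataclass
class RiskAssessment:
    stage: Stage
    assessor: str  # independent reviewer, per the article's recommendation
    risks_identified: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    approved: bool = False

# Example usage with hypothetical content
design_review = RiskAssessment(
    stage=Stage.DESIGN,
    assessor="External human rights reviewer",
    risks_identified=["Training data under-represents the Global Majority"],
    mitigations=["Audit data sources and document known gaps before development"],
)
print(design_review)
```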

When we look in the mirror of the future we have a choice – to see the truth and take action or to blame the mirror.   

If it’s not ethical or it poses risks to humans or the planet, don’t do it. 

Don’t miss the opportunity to join me and others at “The Good, Bad, and Ugly of AI” breakfast event series on Thursday 25th May, from 08:00-12:00 at Soho House White City, London. This event, sponsored by New World Tech, will bring together experts from various industries to explore the impact of AI on our world. We’ll delve into the Good, the Bad, and the Ugly sides of AI and discuss its applications, ethical concerns, and the evolving workforce landscape. Places are limited, so please book your ticket as soon as possible and enjoy a delicious breakfast and scintillating conversation and debate. 

https://www.eventbrite.co.uk/e/fearless-if-ai-is-the-answer-what-are-the-questions-tickets-619028961287

This event is the result of a collective effort by three highly skilled women whose careers span three decades in technology and creativity: Katie Bell at Aligned Studios, Nicole Yershon at The NY Collective, and Emma Jackson at The 5Gs, in collaboration with New World Tech.

Update: Tickets are sold out, so please visit https://www.alignedstudios.io/fearless to be first in line to get tickets for subsequent events – the next one is in July at Home Grown, London – the private members club for entrepreneurs and investors.