by Alistair Dent, Chief Strategy Officer at data consultancy Profusion
Ever since OpenAI’s game-changing ChatGPT hit the scene with its remarkable (though not entirely faultless) ability to produce human-like text, it seems it is all anyone can talk about. And it’s easy to see why. With a little editing, the breakthrough chatbot can be used to do everything from writing emails and proposals through to creating poetry and code. In fact, it’s even able to pass some graduate-level exams. The result is a huge opportunity for businesses to automate key processes and streamline and enhance their overall operations.
But, while there is certainly no disputing the remarkable potential generative AI offers for innovation, efficiency and even creativity, it is widely documented that it also comes with inherent risks. One key concern – as with the wider AI category – is the risk of inherent bias, which could lead to the perpetuation of harmful stereotypes or discrimination.
The technology could also compromise privacy and pose a risk of data breaches or identity theft. Ethically, too, there is the associated potential for inaccuracies, misinformation and even disinformation. For business leaders this naturally raises a lot of questions. Given that generative AI is an undoubtedly powerful and rapidly developing field, how can it be used both effectively and ethically?
The first thing to do is to put everything in perspective. Much of the discussion around AI is speculation and hype. Currently, although very impressive, ChatGPT and other generative AI apps are a long way from being able to do even a small percentage of what humans are capable of. They are also far from flawless. The risk of Skynet being created tomorrow is negligible. This means that when we speak about the average business using generative AI ethically, we are not talking about the big, world-changing risks – we’re talking about the small, complex actions businesses will regularly take that, if mishandled, could have undesirable consequences. These decisions can soon stack up to have big implications for a business and society at large.
As AI is developing at such a pace, businesses simply can’t rely on regulation to fully guide them. The law cannot keep up. We saw earlier in the year that the EU’s AI Act had to be hastily redrafted because legislators were completely blindsided by the launch of ChatGPT. This pace of development also means that creating your own ethical framework needs to happen now – even if you do not currently have plans to use generative AI. The longer you delay, the more difficult it will be to create an ethical decision-making culture within your organisation.
Getting started
Data ethics is not a checklist of dos and don’ts. It’s the creation of guardrails and principles that underpin an ethical culture, one that equips decision makers with the knowledge and expertise to make the right judgement calls when presented with challenging moral issues. A company’s approach to ESG plays an outsized role in determining whether it will have the tools to use data ethically. There’s a very simple reason for this. A diverse team is able to leverage all of its experiences and perspectives to anticipate how your use of data will impact different groups. One of the clearest risks of using generative AI is that it will be biased against a particular group of people. This is an issue that has already caught out many companies in how they use data and design algorithms.
Accountability and transparency
The next step is to look at the structures and policies that will enable ethical decision-making to happen in practice. Accountability is a key aspect of this. You need someone who is ultimately responsible for holding your organisation to its self-stated ethical standards.
There is some debate as to who is best placed for this task. For some companies that may be the Chief Data Officer, but this carries a potential conflict of interest (they would, in effect, be marking their own homework). Others choose the Chief Compliance Officer; however, ethics goes beyond legal compliance. Personally, I think the Chief Executive will often be the most logical fit, especially for smaller companies. Whichever individual oversees your ethical policy, it’s essential that they are empowered – both to make critical decisions and to hold colleagues to account should they fail in their ethical responsibilities.
Aligned with accountability are transparency and trust. Your team and your customers or clients need to know how and why you do and do not use AI for particular purposes. Communicating your values and decision-making in clear and understandable language is key.
Your actual ethics
Putting pen to paper to outline your ethics is the relatively easy part of this endeavour. It should be in harmony with your company values and be framed in a way that supports your organisation rather than impeding it. Think of it from the perspective of ‘what you should do’, rather than ‘what you shouldn’t do’. There are resources online that can help support you on this journey.
For example, we have collaborated with Pinsent Masons and a host of data academics and experts to create a free ethics guide that provides a lot of practical advice.
Data education
It is impossible to comprehend the ramifications of generative AI if you do not have a basic understanding of how it works.
This knowledge needs to be shared throughout an organisation for a few reasons. One, nearly every member of your team will end up using AI or the outputs of data science to undertake day-to-day tasks. Two, having this expertise siloed in your data team creates bottlenecks and runs the risk of that team ‘marking their own homework’ with little oversight. And finally, innovation can come from any part of your organisation. Team members will be better able to responsibly apply generative AI in new and creative ways if they have been upskilled on data.
It’s also important to note that training is not a one-and-done exercise. Knowledge can be easily lost or become obsolete. Running annual, or ideally biannual, training sessions for your team will help to ensure your culture is maintained.
There’s no doubt that ChatGPT is just the tip of the generative AI iceberg, as tech giants continue to develop new models and our reliance on them grows. It is therefore vital that businesses address the ethical risks associated with these tools and put the right procedures, education and training in place so they can use the technology ethically, legally and responsibly. In a world where it’s becoming ever more important to be seen to ‘do the right thing’, this will help them harness the full benefits of this powerful technological evolution while securing peace of mind.