
Cassius Naylor, Outvertising: A mark of provenance – centring human creatives in an AI revolution

By Cassius Naylor, Co-Director of Advocacy, Outvertising

Like many, I recently found myself grinning at a delightful video online of a cat moving majestically through a garden full of flowers. Amusement turned to awe, and then to unease, when I saw that it had been generated by OpenAI’s new text-to-video model, Sora.

This might be a Pavlovian response conditioned by months of very public hand-wringing about AI-generated disinformation, but at that moment I was feeling for the thousands of young LGBTQ+ creatives watching these demonstrations and fearing for their livelihoods. How easily this instrument could be leveraged to produce ad creative, I thought. How many artworkers, actors, set designers or runners could be left by the wayside, their salaries requisitioned for the cost of a corporate licence.

Modal shifts driven by innovation require obsolescence as a rule, many would say, and yes, it’s a fool’s errand to try to resist forces of this magnitude. No one wants to be the one advocating restraint in the face of what will probably become the most significant technological revolution of our lifetimes. Even UNESCO’s 2021 ethical AI guidance is at pains to strike a balance between maximising equity and minimising “hinder[ing of] innovation or disadvantage [to] small and medium enterprises or start-ups”.

However, I’m not alone in noticing that these technological key changes are coming faster and faster, and we are at serious risk of allowing our human systems of governance to be outpaced. We already failed to strike this balance with social media in the 2010s, to notable effect. This shift would be an order of magnitude more significant.

I’m not saying we’ve arrived at the edge of this cliff yet. We still have the opportunity to pull back from the unknowable consequences of this growth and build guardrails to preserve the dignity of any system’s most important component: the human beings involved.

In particular, I am proposing two things here: a stakeholder-centric model of AI implementation, and a practical initiative to begin achieving it. Today I only have space to talk about the initiative, but I discuss its philosophical underpinnings elsewhere.

As a practical step to protect creatives through the AI revolution, what I’m suggesting is this: agencies should be able to verify that their visual assets are 100% (or at least predominantly) human-generated through an accreditation like the Fairtrade mark. I’m going to provisionally call this the ‘Handmade’ mark.

The Handmade mark should be independently awarded to campaigns upon certification of the assets, and advertisers should be able to access a directory of agencies with the highest use of Handmade creative. They might go so far as to build requirements for the certification into supplier procurement. This is not to say that many agencies won’t, or indeed shouldn’t, use AI for their video or image assets; that’s to be expected. That output will, however, be to Handmade work what Sainsbury’s Basics is to Taste the Difference.
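To make that tangible, and purely as a thought experiment since no such scheme yet exists, here is a minimal sketch of what a machine-readable Handmade certification record might look like. Every field name and threshold below is an invented assumption, not a real standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class HandmadeCertificate:
    """Hypothetical certification record for a single creative asset.

    Nothing here is a real standard; field names and the qualifying
    threshold are illustrative assumptions only.
    """
    asset_id: str                 # agency's internal reference for the asset
    campaign: str                 # campaign the asset was produced for
    agency: str                   # producing agency, for the public directory
    human_generated_share: float  # 0.0-1.0, as assessed by the certifier
    certified_on: date            # date the independent body signed it off

    @property
    def qualifies(self) -> bool:
        # "100% (or at least predominantly) human-generated" -- the 0.8
        # cut-off for "predominantly" is an invented placeholder.
        return self.human_generated_share >= 0.8

# Example: a mostly human-made hero film would carry the mark.
cert = HandmadeCertificate(
    asset_id="hero-film-001",
    campaign="Spring launch",
    agency="Example Agency",
    human_generated_share=0.95,
    certified_on=date(2025, 3, 1),
)
print(cert.qualifies)  # True
```

In practice, the record would be issued and signed by the independent certifying body rather than the agency itself, much as Fairtrade audits sit outside the producer.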

Of course, it’s not enough to simply will this into being with a mark of provenance. We, as producers, commissioners and consumers of creative work, have to embrace the same trade-off with Handmade creative as with Fairtrade goods: higher production costs and higher end prices, but ultimately a greater contribution at a time when it is most vital. We have to seek out and celebrate Handmade work and the people who make it, and establish a quality premium inherent in its being Handmade. In doing so, I think we can help to smooth the transition for creative professionals into a new way of working, one that we know is coming. As the Aspen Institute notes, the “conversation needs to be reframed to show how wins and losses are shared across society”.

There is a logical justification for this, as I was told by a source within a major AI innovator (whom I cannot identify, as their firm keeps strict control over who may comment on AI on its behalf):

“The best AI-generated content you’ll get will be average at best because it’s based on all of the data that went before it. Neither will it be particularly unique as anyone can query the same data and receive similarly generated outputs”.

“If the time saved by AI automation and generation is only used for human resource reduction, the quality of outputs will decline and the risk for brand damage will increase. Time saving should be re-invested in the unique creative humans who conceive the ideas, then use the AI toolsets to realise them, and can finesse and quality assure the outputs to meet their audiences’ needs and their businesses’ intents.”

Firms planning AI integrations should also undertake some form of Ethical Impact Assessment, in line with UNESCO’s recommendation that “such impact assessments should identify impacts on human rights and fundamental freedoms, in particular but not limited to the rights of marginalized and vulnerable people or people in vulnerable situations, labour rights, the environment and ecosystems and ethical and social implications”.

There remain many questions to answer in this space. There is a foundational issue of bias in the very datasets used to train these models, which replicates that inequity in the outputs. This is particularly notable for minoritised communities, for whom representative data is less readily available. There are also questions about the risks of AI being applied to the ends of dis/misinformation, mass surveillance, fraud and abuse. All of these issues need to be addressed satisfactorily before we dive into a potentially bottomless sea of opportunity. In this piece I’ve attempted to ask and answer just one of those questions, one that relates not so much to the development of equitable AI technology as to the deployment of that technology to equitable ends.

AI must serve human flourishing, not merely replicate and exacerbate existing inequities. To ensure that, we have to build equity into the heart of how we use it, re-centre the human in the model, and try, for once, to walk before we run.
