
How to sharpen up your content output while staying on the right side of the AI line

As AI-generated content comes under growing scrutiny, should we fear AI’s role, can it be trusted, and who should bear responsibility for its impact? Blaise Hope, CEO and founder of Origin Hope, delves into the shared responsibility of stakeholders and explores practical steps to harness the positive power of AI in content creation while safeguarding reliability, trust, and accountability.

AI will become a powerful driver of how we create content. Tools like ChatGPT and Bard have demonstrated to the world at large AI’s potential to revolutionise the process of creation, saving time and making previously impossible ideas achievable. They let us move directly between ideas, free of the usual interpretive and physical bottlenecks.

While the two most famous AI tools are not great examples of professional deployment, given their tendency towards plagiarism, inaccuracy, and manipulation, they do illustrate the productive capacity AI can unlock.

Despite the current hype, generative AI tools are not ‘new’; they’ve existed for some time. The recent advances are down to increases in computing power, training time, and accessibility. As a result, we’re seeing a tsunami of content flooding the internet, but the earthquake started years ago.

The questions AI raised in 2023 – how can it help, should it be feared, should it be trusted, will it last, should it be regulated, who should be responsible? – are important ones, and particularly relevant to content creation.

The Scrutiny of AI and Content

The good news is that general-purpose AI tools are nowhere near as valuable or effective as specialised AI deeptech stacks that handle a specific function. That also means they are nowhere near as dangerous as they are made out to be, and it would be very hard to build a dangerous deeptech stack that we could not defend against.

However, just as we saw with the arrival of social media, there is the potential for damaging waves of misinformation to spread, and that can have far-reaching consequences. 

Take Instagram, for example; it’s filled with people altering their physical appearance beyond a simple filter. These tools let people take on different facial features, ages, and even genders far more smoothly than any filter ever could. Twitter sleuths zealously race to verify or debunk photos as they go viral. All of this gets headlines, but it does not really represent risk.

Risk is allowing fake news to proliferate among communities in ways that lead to acts of violence, as happened with the Rohingya in Myanmar.

There is a big misconception surrounding AI. No matter how high-end these tools are, they are not intelligent, and they are not sentient beings. They are pattern-matching systems trained to predict the expected response based on countless actions and word choices accumulated over time.

Even if we train them in decision-making, we merely train them to mimic the decision-making humans do, meaning they won’t be capable of doing something entirely new. Their capabilities might seem scary, but behind every tool is a wielder.

Humanity’s essence lies in our capacity to create something new and adapt ideas in a split second. AI can’t replicate human creativity, but it can push the point at which we need to flex that creativity further out than it used to be.

AI also means we will have to be more creative, not less. Remember when everybody thought deepfakes meant we would never be able to tell fake from real again? That lasted about a week, and though deepfake tech has accelerated like everything else, people can usually tell almost immediately and dismiss it.

Shared Responsibility in Content Creation

AI algorithms can only identify what they have been trained to recognise, and they are prone to bias and errors. When they appear to make an “intelligence” leap or a decision outside their bounds, they only do so as a result of continued mathematical modelling. That is not the same as forming a “new” thought – they define ‘new’ entirely in terms of the old material they can see and fit into their programming.

Social media put the onus on people to use it responsibly. AI does not mean an end to the era of ethics and responsibility; it just means a different place to apply them.

The process of creation itself is an entirely private act. In the world of AI-generated content, everyone is the creator, the proofreader, the fact-checker, the editor, and the moderator – it is important to keep this mindset and avoid being spoon-fed by the tools you are supposed to be using. The audience can tell when you have been, and any short-lived success dies rapidly.

Yet responsibility – for ensuring that the line between fact and fiction is not blurred – does not sit with one guardian. It should be shared across all stakeholders involved, not just content creators.

The government, social media platforms, big tech companies and fellow users should share the burden of mitigating the risks and moderating AI-generated content in the way they do for everything else. 

Embracing AI’s Power with Caution

What practical steps can those working in the digital industry take to embrace the positive power of AI in their content creation while staying acutely aware of the pitfalls?

Put simply: you are the author, and if you use an AI to do something naughty, that is on you. It may take a while for policymakers to agree on this, as it requires an understanding of a subject many people are reflexively scared of.

More advanced tools eliminate more of the constraints of time, effort, and space – much as a printer produces memos faster than a printing press. However, the process doesn’t start with the tools; it starts with the humans in your organisation.

To fully harness advancements in AI, you need reliable processes with humans at their heart. At the base level, let AI automate repetitive tasks and handle complex calculations, but leave decisions and direction to humans.

With automation, big chunks of data from multiple clusters can be condensed into a single brief that humans can easily skim and scan for errors. Having AI do that final check itself is incredibly complex and imperfect, because categories and definitions shift imperceptibly and constantly, well beyond spelling or other obvious mistakes.
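To make that division of labour concrete, here is a minimal sketch of such a pipeline in Python. It assumes a hypothetical setup: the summarise function stands in for whatever model or API actually condenses the data, and names like Brief and review are purely illustrative. What it shows is the split described above: the machine drafts, and a human decides.

```python
# Minimal human-in-the-loop content pipeline (illustrative sketch only).
from dataclasses import dataclass, field

@dataclass
class Brief:
    source_chunks: list[str]      # raw data clusters feeding the brief
    draft: str                    # machine-condensed summary
    approved: bool = False        # set only by a human reviewer
    notes: list[str] = field(default_factory=list)

def summarise(chunks: list[str]) -> str:
    """Automated step: condense many chunks into one skimmable draft.

    Placeholder logic; in practice this call would go to your
    summarisation model or API of choice.
    """
    return " | ".join(chunk.strip() for chunk in chunks)

def review(brief: Brief, editor: str) -> Brief:
    """Human step: skim the draft, record notes, make the call.

    Decisions and direction stay with the editor, never the model.
    """
    brief.notes.append(f"reviewed by {editor}")
    brief.approved = True  # in a real workflow this is a human judgement
    return brief

if __name__ == "__main__":
    chunks = [
        "Cluster A: engagement up 12% week on week.",       # made-up sample data
        "Cluster B: two factual corrections issued.",       # made-up sample data
    ]
    brief = review(Brief(chunks, summarise(chunks)), editor="duty editor")
    print(brief.draft)
    print("approved:", brief.approved, "|", brief.notes)
```

The exact shape of the review step matters far less than the principle it encodes: approval is a human field that the automation never sets for itself.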

Start with establishing transparency and accountability in your process; make sure everyone knows they are responsible for any content they produce. Have robust oversight and ethical guidance, then maintain the momentum of those processes.

When the entire human-AI machine starts to run by itself, start pushing its capabilities, and your own product, further. This last part is important: use the capacity you or your team have freed up by automating repetitive tasks to analyse your content’s performance and do an even BETTER job.
