Adam Chugg: It’s time to defuse the social media bomb

By Adam Chugg, Head of Big Tech Activations, the7stars

If algorithmic social media were the advent of the automatic weapon for societal division, I fear generative AI could be the advent of the nuclear bomb. More misinformation, but faster and even more compelling. My worry is that we’re sleepwalking, again, into the unintended consequences of such technology, driven by our collective hamartia: our desire for growth.

Generative AI is already transforming the advertising and marketing sectors by enhancing creativity, improving efficiency, enabling personalisation, assisting in data synthesis – the list goes on. But there are still important lessons to be learned from the negative consequences we’ve seen from the ad-funded internet.

Amid the rapidly advancing AI arms race, there appears to be little will to learn from past mistakes. Rather than addressing existing issues and exercising caution, we seem to be hurtling towards integrating generative AI into the attention economy. Here, I’m keen to focus on potential solutions – specifically, solutions to the weaponisation of algorithmic social media.

Earlier in March, Meta launched its Meta Verified subscription service in the US. When first announced, Meta listed the following benefits:

  • A verified badge, confirming you’re the real you and that your account has been authenticated with a government ID.
  • More protection from impersonation with proactive account monitoring for impersonators who might target people with growing online audiences. 
  • Help when you need it with access to a real person for common account issues.
  • Increased visibility and reach with prominence in some areas of the platform – like search, comments and recommendations.
  • Exclusive features to express yourself in unique ways.

The exception, for now, is increased visibility and reach, which has since been removed from the offering.

I can see several benefits to such verification:

  • Less pressure on advertising as a sole revenue stream
  • Greater provenance of content
  • A less effective spread of misinformation

Of course, it’s going to take a lot of traction to realise these benefits. I’m not for one moment suggesting tech platforms can, or will, swap their ad-funded business models for subscriptions. And verification isn’t a silver bullet for all these issues. But at least the model moves us in a better direction.

At its core, verifying individuals builds trust among users. For one, by verifying your profile, you’re letting other users know you’re willing to put your money where your mouth is. Part of what emboldens trolls and misinformation spreaders, powerful in their aggregation, is their anonymity. Anything that makes a user think twice about their content, including verification, seems like a good thing.

I think prioritised reach for verified users could be the most significant feature in the battle against misinformation. Unfortunately, Meta have shelved this, for now.

I thought this when Elon Musk launched Twitter Blue in his own effort to deal with bots and troll farms, the industrial-scale propagators of misinformation. And, despite the haphazard application of the idea initially, I still believe it has potential.

The vision is that users will find it easier to trace the provenance of content and will place less value on content from unverified sources. More restricted reach for unverified profiles should also mean that propaganda farms, like Russia’s infamous Internet Research Agency, find it harder to achieve their aims.

And there would be a huge benefit for advertisers too – a verified audience. There’s no doubt advertisers would see better performance from targeting verified audiences, provided they scale: this is essentially a robust guarantee of reaching genuine, unique profiles. Imagine being able to verify who you’re talking to. Revolutionary stuff. Platforms would benefit too, from advertisers willing to pay a premium for those audiences.

Overall, the focus shifts from quantity to quality.

Without any such verification, there’s always a question mark over who you’re really talking to.

Just this week, TikTok was fined by the ICO, which estimated that up to 1.4m children under the age of 13 were using the platform in 2020, and that by using their data to algorithmically serve them content, TikTok had breached UK law.

Of course, you have to be at least 13 to use most social media platforms. But what self-respecting 12-year-old with a smartphone is going to report their genuine DOB during sign-up? This is why TikTok does not accept alcohol or gambling brands as advertisers: strict age gating is not possible.

In many ways you can validate audience quality through business results, or the lack of them. But a verified audience provides confidence.

Of course, verification is not without its pitfalls. There may be privacy concerns around sharing PII with platforms or governments. There may be disadvantages to minority groups who find a voice in anonymity. There may also be legitimate concerns around creating an economic hierarchy around freedom of speech.

But that shouldn’t stop us exploring the premise and seeking the right alternatives. My colleague Jonathan Harrison pointed me in the direction of OpenAI co-founder Sam Altman’s World ID project, a privacy-first “proof of personhood” for the internet, which aims to address these concerns.

I’m an advocate of introducing greater provenance into the digital ecosystem. The benefit of verification to advertisers, society and democracy seems significant. If that sounds hyperbolic, it shouldn’t.

It’s a lack of provenance and authentication that leaves us vulnerable to manipulation. Having this supercharged by misinformation produced in even larger quantities by generative AI feels like a big risk. Tech companies need to take this responsibility seriously, and that requires objective oversight.