
How to manage brand safety in the new world of community notes

By Ciaran Deering, Head of Online, The Grove Media

Meta’s announcement last month that it would be abandoning third-party content checking in favour of a community notes model, similar to the approach introduced by X, no doubt had alarm bells ringing in many marketing departments. The advertising industry has been working hard to drive brand safety, and Meta’s decision seems to alter that course.

Community notes are a user-driven system designed to allow individuals to add context to posts, enabling digital media owners to step away from content checking. Previously, Meta’s approach to moderation had relied heavily on algorithms and partnerships with fact-checking organisations.

In addition to the move to community notes, Meta announced that it would be lifting restrictions on topic areas such as immigration and gender. Meta’s Chief Global Affairs Officer Joel Kaplan said this was to avoid ‘overreach’. “Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail,’ and we are often too slow to respond when they do,” he said in the announcement.

Meta’s decision will certainly save them money and is in line with shifts in policy and culture in the US. But the jury is out on whether the new system will be effective. X replaced fact-checking with community notes soon after Elon Musk purchased the company in 2022. The platform has subsequently haemorrhaged users and lost ad revenue – last year Kantar reported that 26% of advertisers planned to cut adspend on X. And research from The Washington Post found that community notes were failing to stop falsehoods and misinformation on X.

What community notes means for ads on Meta

The good news is that Meta has said that community notes won’t apply to ads, so users won’t be able to annotate any paid ads. When X shifted to community notes, one of the reasons it lost ad revenue was because users could leave notes on paid ads. 

However, Meta’s policy doesn’t entirely ring-fence brands from negative or damaging comments. An influencer promoting a brand could be subject to community notes, and ads placed near controversial organic content could still face community backlash, although this has always been an issue on social media.

So advertisers are right to have concerns about brand safety on Facebook and Instagram following this change in content moderation. Not surprisingly, Meta has had meetings with its biggest advertisers to reassure them of its commitment to brand safety and suitable ad placement.

There have been some murmurings of advertisers looking for alternative platforms which provide stricter brand safety controls, but none of our client base at The Grove are looking to do this. Regardless of the move to community notes, Meta is in a strong position – they offer advertisers significant reach, impressive data-driven targeting opportunities and historical results to prove their worth.

Minimise risk and follow best practice on Meta

For those advertisers keeping Meta on their schedules, but nevertheless concerned about brand safety, there are important steps you can take to minimise risk and ensure you are following best practice when it comes to audience and ad placement suitability. 

Develop or revisit your brand suitability strategy so you’re clear on the kinds of content you want to avoid. Continue to add targeting exclusions where relevant. However, don’t overdo it – some exclusions are more worthwhile than others. For example, if you’re selling an animal-origin product, you may want to exclude PETA (People for the Ethical Treatment of Animals), the Royal Society for the Prevention of Cruelty to Animals (RSPCA), STOP Animal Cruelty etc.

There are many other exclusions you can investigate – by content type, such as live videos, and by topic, e.g. politics. Rather than implement all available exclusions within these categories, our recommendation is to maintain the ‘moderate inventory’ filter, which automatically excludes excessively controversial content.

The other filters are ‘limited’ and ‘expanded’. Expanded inventory is too broad and permits ‘tragedy or conflict’, ‘objectionable activity’ and ‘sexual or suggestive’ content. It’s important to note that as of Feb 24th, for in-stream video, Facebook Reels and the Meta Audience Network, Meta will change the default setting to ‘expanded’. Our advice is to manually reset this to ‘moderate’. We also recommend excluding live videos, where the opportunity to spot controversial content is limited.

Make sure you regularly check delivery reports and partner-publisher lists if you are using the Meta Audience Network (which extends ads to a wider list of publishers not owned by Meta). This will ensure you understand the nature of the content your ads appear against and can react if anything is unsuitable for your brand.

If you are extending Meta campaigns to Audience Network, implement publisher blocklists – these can exclude website domains, and pages and profile URLs on Instagram and Facebook. Our advice is to download your current partner-publisher list, identify what you want to block and upload your list back into Meta ads for exclusion. Meta has a dedicated Brand Safety and Suitability Center.  

Brand safety tools can help to identify and minimise risk

For advertisers with sufficient budget, third-party brand safety tools such as Moat, Integral Ad Science, DoubleVerify and Adloox integrate with Meta Ads and provide advanced filtering, verification and reporting beyond Meta’s built-in safety measures.

When choosing a brand safety tool, advertisers need to be clear on what they are looking to achieve. Is it efficiency (pre-bid segments), granular yet simple reporting, or publisher integrations? You need to establish whether you want to monitor or block, and decide whether you need a tool that is scalable or simply one that supports custom categories.

One thing to bear in mind is that upholding brand safety in a user-generated environment will always be more challenging than in publisher environments. And striking the right balance between brand safety and reaching your key audiences is a complex task – particularly when content verification is democratised and doesn’t follow a framework set collectively by the industry.