The Impact of Toxic and Harmful Content on Brands, Their Teams and Customers
Online toxicity can be damaging for brands, harming the well-being of frontline staff and carrying real commercial consequences when customers are exposed to it. So, how can companies work to alleviate these negative effects?
Here, Matthieu Boutard, President and co-founder of Bodyguard.ai, outlines the benefits and challenges of content moderation and explores how companies can take a blended approach to achieve the best outcomes.
With the Online Safety Bill set to pass into UK law in the coming months, much attention has been paid to the negative impact of social media on its users.
The goal of the bill is to deliver on the government’s manifesto commitment to make the UK the safest place in the world to be online. However, it will need to strike a critical balance to achieve this effectively.
According to the Department for Digital, Culture, Media and Sport (DCMS), it aims to keep children safe, stop racial hate and protect democracy online, while equally ensuring that people in the UK can express themselves freely and participate in pluralistic and robust debate.
The bill will place new obligations upon organisations to remove illegal or harmful content. Further, firms that fail to comply with these new rules could face fines of up to £18 million or 10% of their annual global turnover – whichever is higher.
Such measures may seem drastic, but they are becoming increasingly necessary. Online toxicity is rife, spanning all communications channels, from social media to in-game chat.
In exploring the extent of the problem, we recently published our inaugural whitepaper examining the online toxicity aimed at businesses and brands in the 12 months to July 2022.
During this process, we analysed over 170 million pieces of content across 1,200 brand channels in six languages, finding that as much as 5.24% of all content generated by online communities is toxic. Of that, 3.28% could be classed as hateful (insults, hatred, misogyny, threats, racism, etc.), while 1.96% could be classed as junk (scams, fraud, trolling, etc.).
Three Key Challenges of Content Moderation
Unfortunately, the growing prevalence of online hate and toxic content is increasingly seeping into brand-based communication channels such as customer forums, social media pages, and message boards.
For brands, this can have a significant commercial impact. Indeed, one study suggests that as many as four in 10 consumers will leave a platform after their first exposure to harmful language. Further, they may share their poor experience with others, creating a domino effect of irreparable brand damage.
It is therefore important that brands moderate their social media content to remove toxic comments. However, doing this effectively is no easy task, and there are several potential challenges.
First, moderation can be a highly resource-intensive and taxing task to complete manually. A trained human moderator typically needs 10 seconds to analyse and moderate a single comment.
Therefore, if hundreds or thousands of comments are posted at the same time, managing the flow of hateful comments in real time becomes an impossible task. As a result, many content moderators are left mentally exhausted by the sheer volume of work.
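To put that scale in perspective, here is a minimal back-of-the-envelope sketch in Python. The 10 seconds per comment comes from the figure above; the 5,000-comments-per-hour spike is a hypothetical volume chosen purely for illustration.

```python
# Back-of-the-envelope estimate of manual moderation capacity.
# The ~10 seconds per comment is the figure cited above; the incoming
# volume is a hypothetical spike used only to illustrate the scale.
SECONDS_PER_COMMENT = 10
comments_per_hour_per_moderator = 3600 / SECONDS_PER_COMMENT   # 360 comments/hour

incoming_comments_per_hour = 5_000   # assumed spike on a busy brand channel
moderators_needed = incoming_comments_per_hour / comments_per_hour_per_moderator

print(f"One moderator clears ~{comments_per_hour_per_moderator:.0f} comments per hour")
print(f"A spike of {incoming_comments_per_hour:,} comments per hour would need "
      f"~{moderators_needed:.0f} moderators working in real time")
```

Even under these modest assumptions, a single busy hour would demand a team of around 14 full-time moderators working without pause.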
Second, repeated exposure to bad language, toxic videos and harmful content can take a psychological toll on moderators. Their mental health cannot be overlooked, and the resulting burnout can be costly to businesses, potentially accelerating employee turnover.
Third, companies need to tread a fine line when moderating to ensure they aren’t accused of censorship. Brand channels such as social media pages are often the primary venue for customers to engage with brands, provide feedback and hold them to account. Brands that give the impression they are simply deleting any critical or negative comment may also come under fire.
A Blended Approach for Balanced Outcomes
Fortunately, AI and machine learning-powered technologies are beginning to address some of the challenges facing human moderators. However, there are further issues that need to be ironed out here.
Machine learning algorithms currently used by social platforms such as Facebook and Instagram have been shown to have error rates as high as 40%. As a result, only 62.5% of hateful content is currently removed from social networks, according to the European Commission, leaving large volumes of unmoderated content that can easily harm people and businesses.
What’s more, these algorithms also struggle with the sensitive issue of freedom of expression. Lacking the ability to detect linguistic subtleties, they are prone to overreacting and can lean too far towards censorship.
With both human moderation and AI-driven solutions having their limitations, a blended approach is required. Indeed, by combining intelligent machine learning with a human team comprising linguists, quality controllers and programmers, brands will be well-placed to remove hateful comments more quickly and effectively.
Of course, selecting the right solution is key. Ideally, brands should adopt a solution advanced enough to recognise the difference between friends interacting with “colourful” language and hostile comments directed at a brand.
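As a rough illustration of how such a blended approach can be wired together, the sketch below routes each comment through an ML score, auto-removes clear-cut toxicity, keeps likely-harmless banter, and escalates ambiguous cases to human reviewers. The classifier, thresholds and function names are assumptions made for this example, not a description of Bodyguard.ai’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str        # "remove", "keep" or "human_review"
    toxicity: float    # model score between 0 and 1

def score_toxicity(comment: str) -> float:
    """Placeholder for a trained toxicity classifier (hypothetical)."""
    toxic_markers = ("idiot", "hate you", "scam")
    return 0.9 if any(m in comment.lower() for m in toxic_markers) else 0.1

def moderate(comment: str,
             remove_above: float = 0.85,   # assumed threshold for auto-removal
             review_above: float = 0.5) -> Verdict:   # assumed threshold for escalation
    score = score_toxicity(comment)
    if score >= remove_above:
        return Verdict("remove", score)        # confident: remove automatically
    if score >= review_above:
        return Verdict("human_review", score)  # ambiguous: escalate to a human team
    return Verdict("keep", score)              # likely harmless banter: keep

print(moderate("You're an idiot and I hate you"))  # -> action='remove'
print(moderate("Great product, love it!"))         # -> action='keep'
```

The key design choice is that the machine handles the unambiguous volume in real time, while the human team of linguists and quality controllers only sees the borderline cases where context and linguistic subtlety matter most.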
Striking this balance is vital. To encourage engagement and build trust in online interactions, it is crucial that brands work to ensure that toxicity doesn’t pollute communications channels while also providing consumers with a platform to criticise and debate.
Thankfully, with the right approach, moderation can be effective. It shouldn’t be about prohibiting freedom of expression, but about preventing toxic content from reaching potential recipients and making the internet a safer place for everyone.