By the time you finish reading this blog, about 1,200 hours of video will have been uploaded to YouTube. That’s right – enough professional news reports, amateur dance routines and cats that make you laugh out loud to keep you occupied for 50 days. Some will go viral, some will sit dormant and almost all will invite mostly anonymous viewers to publicly post whatever they want about the content or any other topic they wish to comment on.
In an ideal world, every one of those comments would be reviewed before they appeared online. Misogynistic, racist and homophobic posts would be rejected. Embedded links to child pornography and terrorist recruitment sites would never see the light of day. The internet would be what its founders likely dreamed it would become – an open platform that promotes positive community building.
But this is not an ideal world. It is one where the world wide web contains at least 3.5 billion pages, 300 hours of video are uploaded to YouTube every minute and almost 5 billion videos are watched on the platform every day. It is simply impossible to monitor every piece of user-generated content that is uploaded to the internet and that creates an enormous headache for the countless companies that have an online presence.
The headache remedy? Content moderation.
Defined as “the practice of monitoring submissions and applying a set of rules that define what is acceptable and what is not”, content moderation is an essential tool for businesses operating in the online space. From encouraging customers to share product reviews on external websites to inviting material to be posted on one’s own social media pages, user-generated content can be hugely beneficial if it is constructive, engaging and ideally positive.
Of course it is not always so, as evidenced by the fact that more than 30 million videos were removed from YouTube in 2019 for violating its community guidelines, while in the same year Google Maps detected and removed more than 75 million policy-violating reviews and 4 million fake business profiles. Even the smallest of brands risk negative fallout if they do not take adequate steps to moderate user-generated content, with specific areas of concern including:
For anyone still in doubt about the importance of content moderation, consider that management consulting firm Everest Group is tipping the sector to reach up to U.S.$6 billion by 2022. That is a lot of content moderation, with one of five styles typically used to identify concerning material or posts:
Content moderation is increasingly turning to technology to tackle the scourge of inappropriate user-generated content, with robotic process automation and artificial intelligence able to provide companies with cost and time-effective measures to filter unacceptable material.
This is particularly so in the pre-moderation stage, where techniques such as ‘hash matching’ (where a fingerprint of an image is compared with a database of known harmful images) and ‘keyword filtering’ (in which words that indicate potentially harmful content trigger an alert) are used to flag content for review by humans. With human moderators at significant risk of being exposed to harmful content, AI is also proving useful in limiting the type of material they need to view directly by prioritizing content for review based on perceived harm or level of uncertainty.
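To make the two pre-moderation techniques concrete, here is a minimal Python sketch. It is illustrative only: the keyword list and fingerprint database are hypothetical placeholders, and production systems use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas this sketch uses a simple exact SHA-256 hash.

```python
import hashlib

# Hypothetical database of fingerprints of known harmful images.
# (The entry below is a placeholder, not a real fingerprint; real systems
# use perceptual hashes rather than exact SHA-256 digests.)
KNOWN_HARMFUL_HASHES = {
    "0" * 64,  # placeholder 64-character hex digest
}

# Hypothetical list of words that indicate potentially harmful content.
FLAGGED_KEYWORDS = {"spamword", "scamlink"}

def hash_match(image_bytes: bytes) -> bool:
    """Compare an image's fingerprint against the known-harmful database."""
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_HARMFUL_HASHES

def keyword_filter(text: str) -> bool:
    """Return True if the text contains any flagged keyword."""
    words = text.lower().split()
    return any(word in FLAGGED_KEYWORDS for word in words)

def flag_for_human_review(text: str, image_bytes: bytes = b"") -> bool:
    """Pre-moderation check: route matching content to a human moderator."""
    if keyword_filter(text):
        return True
    if image_bytes and hash_match(image_bytes):
        return True
    return False
```

Note that in either case a match does not delete the content outright; consistent with the article's point, it merely queues the item for a human moderator, who makes the final call.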
For all its benefits, experts have acknowledged AI is not a silver bullet and the human factor remains a vital component of any content moderation strategy, hence why many companies are choosing to blend the best of both worlds when tackling the complex issue.
As more businesses drown under the weight of user-generated content, many are turning to skilled content moderators in countries such as the Philippines. The outsourcing sector has a ready-made workforce of senior, intermediate and junior moderators who are highly trained in responding to user comments, identifying inappropriate material, and removing spam, offensive posts and abusive language. Most importantly, social media content moderators have a vested interest in maintaining the reputation of their clients’ businesses and promoting strong public images of their brands.
Outsourced content moderators also have access to software programs that help them find and highlight harmful content. From Microsoft Azure and WebPurify to CrowdSource and Inversoft, they are able to work hand-in-hand with automated systems of moderation to deliver the best results for on-shore businesses. Better still, low living costs in outsourcing hubs such as the Philippines mean companies can employ moderators for up to 70% less than they would pay in their local employment markets.
While user-generated content cannot be allowed to go unchecked, the online space undoubtedly presents endless opportunities for savvy businesses. Having gained a better understanding of the world of content moderation, why not also learn how to build a top-notch omnichannel experience for your customers?