Strategies for Improving Social Media Content Moderation

Social media has democratized the spread of information and ended the once-centralized power of the mainstream media. This has been a major boost for free speech. With the right to express oneself, however, comes the responsibility not to spread misinformation. Social media content moderation is designed to keep misinformation and other harmful content in check while protecting free expression.

In recent years, social media has expanded exponentially, and with that growth, the risk of false information and hate speech has intensified. According to consumer data company Statista, as of May 2021, 43% of millennials in the USA use social media as a daily news source; in some countries, such as the Philippines, 72% of all adults do likewise. Also according to Statista, more than half of Americans say they have encountered fake news online.

These trends are worrying and may escalate as social media use continues to rise. One example: a recent analysis by the Real Facebook Oversight Board found that, across a dataset of 195 accounts and approximately 45,000 posts, climate misinformation posts received between 818,000 and 1.36 million views. The scale of the problem comes into sharp focus when one considers that Facebook, soon to be rebranded as Meta, attracts around 2.91 billion active users per month.

To keep social spaces trustworthy (free of false information) and welcoming (free of disturbing or hurtful content), moderation works best when artificial intelligence (AI), machine learning (ML), and human judgment work together to curb the spread of misinformation and hate speech.

The Right Mix of Humans and Tech Is a Must for Effective Content Moderation

AI can sift through what behavioral scientist Tarleton Gillespie calls "the immense scale of the data" and "the relentlessness of the violations." It can scan massive volumes of content, flag harmful information, and automatically delete content it deems dangerous.

These abilities massively reduce the workload of human content moderators and the extent to which those moderators are exposed to disturbing content. It is now well documented that the psychological fallout can be dangerously high when moderation rests in human hands alone. According to researchers at Harvard University, "journalists, scholars, and analysts have noted PTSD-like symptoms and other mental health issues arising among moderators."

This can be on the global scale of moderators handling, for example, graphic depictions of human rights abuses. Alternatively, it can be a local community group on Facebook that gets into a heated debate on racial profiling and ends up with a moderator having to decide what amounts to hate speech.

However, AI alone cannot solve this problem. Human intervention is just as vital, because only humans can judge the accuracy of content when the violations are more subtle. Human understanding of the nuances of language and sentiment is valuable and cannot be replaced by AI or ML.

As Finnish techno-anthropologist Minna Ruckenstein writes, "The machine has its limitations with interpreting content. It slavishly executes removal tasks based on the training data."

Also, some users try to evade moderation systems by tweaking the words they use. According to The Poynter Institute, for example, some vaccine-hesitant social media users deliberately misspell 'vaccine' as 'vachscene' to share questionable content and avoid automatic flags. With AI alone, such content can fall through the cracks, whereas a human moderator would quickly pick up the gist of it.
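To illustrate, here is a minimal sketch of how an automated filter might catch such spelling tricks by fuzzy-matching each word against a watch list instead of requiring exact matches. The watch list, threshold, and function name are illustrative assumptions, not any platform's actual implementation.

```python
from difflib import SequenceMatcher

# Hypothetical watch list; real systems maintain far larger, curated term lists.
WATCH_LIST = ["vaccine", "misinformation"]

def fuzzy_flag(text: str, threshold: float = 0.7) -> list:
    """Return watch-list terms that any word in `text` closely resembles.

    An exact keyword filter misses spellings like 'vachscene'; comparing
    similarity ratios catches near-miss variants as well.
    """
    flagged = []
    for token in text.lower().split():
        for term in WATCH_LIST:
            # SequenceMatcher.ratio() returns a similarity score in [0, 1];
            # 'vachscene' vs. 'vaccine' scores 0.75 and trips the filter.
            if SequenceMatcher(None, token, term).ratio() >= threshold:
                flagged.append(term)
    return flagged

print(fuzzy_flag("shocking vachscene facts they are hiding"))  # ['vaccine']
```

Even so, fuzzy matching only narrows the gap: a looser threshold catches more variants but also more innocent words, which is exactly where the human moderator comes back in.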

Both AI and human moderators bring significant advantages to the table of content moderation, but neither is perfect or infallible, and each comes with its own risks and weaknesses. When the two are combined, however, the moderation system is at its strongest.
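As a rough sketch of what that combination can look like in practice, the snippet below routes each post by a model's confidence score: high-confidence violations are removed automatically, clear non-violations pass through, and the ambiguous middle band is queued for a human moderator. The thresholds and the stand-in scoring function are illustrative assumptions, not any real platform's values.

```python
# Confidence thresholds for hybrid triage (illustrative values only).
AUTO_REMOVE_AT = 0.95   # high confidence: the machine acts on its own
HUMAN_REVIEW_AT = 0.60  # ambiguous band: a person makes the call

def score_post(text: str) -> float:
    """Stand-in for a trained classifier; real systems use ML models here.

    A crude keyword heuristic keeps this sketch runnable end to end.
    """
    hits = sum(term in text.lower() for term in ("hate", "fake cure", "scam"))
    return min(1.0, 0.4 * hits)

def triage(text: str) -> str:
    score = score_post(text)
    if score >= AUTO_REMOVE_AT:
        return "remove"        # AI handles scale, sparing moderators the exposure
    if score >= HUMAN_REVIEW_AT:
        return "human_review"  # humans handle the nuance the model cannot
    return "allow"

print(triage("This fake cure is a scam"))  # 'human_review' (score 0.8)
```

The key design choice is the middle band: widening it sends more work to humans but lets fewer mistakes through automatically, a trade-off every hybrid system has to tune.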

Strategies for Improving Social Media Content Moderation

Several different mechanisms can improve moderation, because moderation is not just about the nature of the content disseminated in the public domain. It is also about who is posting that content and whether they are who they say they are.

24/7 Content Reviews and Management

Social media never sleeps. That's why content reviews and management must be round-the-clock activities. A breach that threatens trust or security can spread at lightning speed, whether it takes the form of audio, visual, or written content. In fact, malicious or dangerous content often spreads faster than anything else. As Sinan Aral, a professor at MIT who co-authored a study on how false news spreads on Twitter, put it, "We found that falsehood diffuses significantly farther, faster, deeper, and more broadly than the truth, in all categories of information, and in many cases by an order of magnitude." It is thus important, if you're a business owner, to make sure your customers' interests are always protected. This means the screening and approval of user-generated content, based on predefined guidelines, should never sleep either.
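As a toy illustration of screening that never sleeps, the sketch below runs an always-on worker that pulls each new post from a queue and checks it against the guidelines. The queue, the worker loop, and the placeholder guideline check are all assumptions for illustration; production systems use distributed queues and many workers across time zones.

```python
import queue
import threading

# Posts arrive on an in-process queue in this toy setup.
incoming = queue.Queue()

def is_within_guidelines(post: str) -> bool:
    """Stand-in for the predefined guidelines (rules plus trained models)."""
    return "http://" not in post  # placeholder rule: hold posts with raw links

def screen_forever():
    """Pull and screen new content around the clock, post by post."""
    while True:
        post = incoming.get()
        verdict = "approve" if is_within_guidelines(post) else "hold"
        print(f"{verdict}: {post!r}")
        incoming.task_done()

# A daemon worker keeps screening for as long as the service runs.
threading.Thread(target=screen_forever, daemon=True).start()
incoming.put("Our support hours are changing next week.")
incoming.put("FREE $$$ claim now http://sketchy.example")
incoming.join()  # wait until the queue has been drained
```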

Fraud Prevention and Abuse Detection

A combination of human expertise and advanced automation is ideal for stopping fraud in its tracks. That is the only way to protect your brand from the offensive content, images, and video that can proliferate on social media. The job has to be entirely thorough: every piece of user-generated content has to be screened, or the system can fail. Automated abuse-detection rules should review all newly generated content, and once potential instances of abuse have been flagged, they should be examined on a case-by-case basis. Machine learning models, trained on data sets of past violations, then help ensure that nothing slips through the cracks.
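A minimal sketch of such rule-based screening with a human review queue might look like the following. The two rules shown are hypothetical examples; real deployments maintain far larger rule sets alongside learned models.

```python
import re

# Hypothetical abuse-detection rules, each a named predicate over the content.
RULES = {
    "link_spam": lambda text: len(re.findall(r"https?://\S+", text)) >= 3,
    "payment_bait": lambda text: bool(re.search(r"\b(dm me|wire transfer)\b", text, re.I)),
}

review_queue = []  # flagged items awaiting case-by-case human examination

def screen(content: str) -> bool:
    """Run every new piece of user-generated content through all rules.

    Flagged content is queued for a human decision rather than deleted
    outright, since rules alone cannot judge intent.
    """
    hits = [name for name, rule in RULES.items() if rule(content)]
    if hits:
        review_queue.append((content, hits))
    return not hits  # True means the content passed automated screening

screen("DM me to claim your prize, wire transfer only")
print(review_queue)  # [('DM me to claim your prize, ...', ['payment_bait'])]
```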

Profile Impersonation Detection

Fake accounts, emails, and domains are further reminders that social media lacks the regulation required to balance free speech with curbing the spread of false information. Profile impersonation detection is thus vital for any business that wants to protect itself, and again, the combination of human capabilities and AI is optimal: advanced analytics can dismantle obvious fakes before they reach customers, while humans review the more subtle violations.
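For illustration, here is a minimal sketch of handle-based impersonation screening that undoes common character swaps, auto-blocks near-exact lookalikes, and routes weaker resemblances to human review. The protected handles, character map, and thresholds are illustrative assumptions; real systems also weigh display names, avatars, bios, and behavioral signals.

```python
from difflib import SequenceMatcher

# Hypothetical protected brand handles.
PROTECTED = {"helpware", "helpware_support"}

# Undo common character swaps (1 -> l, 0 -> o, 3 -> e, 5 -> s) before comparing.
LEET = str.maketrans("1035", "loes")

def impersonation_check(new_handle: str) -> str:
    """Screen a newly registered handle against protected brand handles.

    The genuine accounts already exist, so a fresh registration that matches
    one exactly after normalization is treated as an obvious fake.
    """
    norm = new_handle.lower().translate(LEET)
    if norm in PROTECTED:
        return "block"  # exact lookalike once the character swaps are undone
    best = max(SequenceMatcher(None, norm, p).ratio() for p in PROTECTED)
    if best >= 0.90:
        return "block"         # obvious fake, dismantled before reaching customers
    if best >= 0.75:
        return "human_review"  # subtle resemblance, needs human judgment
    return "allow"

print(impersonation_check("He1pware"))        # 'block'
print(impersonation_check("helpware-team"))   # 'human_review'
print(impersonation_check("gardening_tips"))  # 'allow'
```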

Improve Your Content Moderation with the Right Human and AI Mix 

Social media is here to stay. Unfortunately, that means frauds, fakes, and those who spread misinformation and malicious content aren't going anywhere either. In many countries, comprehensive regulation is unlikely to come from any central authority, so companies need their own strategies for content moderation. Protecting your brand means putting mechanisms such as round-the-clock reviews and fraud, abuse, and profile-impersonation detection in place.

The good news is that you can outsource content moderation to a reliable company that brings human expertise and AI together so that you don't have the mammoth task and expense of trying to set it up in-house. To learn how to establish a social media content strategy for your company, contact the team at Helpware.

Nick Mannella
Chief Revenue Officer
