AI, a helping hand for companies in content moderation

In today’s digital age, billions of pieces of content are uploaded to online platforms and websites every day.

Moderation of this material has therefore never been more critical and challenging. While most of this uploaded content may be positive, we are also seeing a growing amount of harmful and illegal material, from violence and self-harm to extremist rhetoric, sexually explicit images and child sexual abuse material (CSAM).

Dealing with this flood of harmful content is now a crucial challenge for companies. Those that are unable (or unwilling) to do so face significant penalties and put children at great risk.

Our own research bears this out: over a third (38%) of parents have been approached by their children after they encountered harmful or illegal content online, and many children can access material as graphic and damaging as CSAM within just ten minutes of going online.

Therefore, it is time for stronger content moderation efforts and for companies to move beyond traditional manual moderation methods that have become impractical and unscalable. Instead, they should leverage the complementary capabilities of AI that are changing the landscape of content moderation through automation, improved accuracy and scalability.

However, as with any new innovation, companies interested in using AI should implement the technology in a way that ensures regulatory compliance. The decisions companies make today will have a massive impact on their future operations.

The helping hand of AI

AI has drastically changed the content moderation landscape by leveraging automated scanning of images, pre-recorded videos, live streams, and other types of content on the fly. It can identify issues such as underage activity in adult entertainment, nudity, sexual activity, extreme violence, self-harm and hate symbols on user-generated content platforms, including social media.

The AI is trained on large amounts of “ground truth data,” collecting and analyzing insights from archives of tagged images and videos ranging from weapons to explicit content. The accuracy and effectiveness of AI systems are directly related to the quality and quantity of this data. Once trained, the AI can effectively detect various forms of harmful content. This is particularly important in live-streaming scenarios, where content moderation needs to work across different platforms with different legal and community standards.
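To make this concrete, below is a minimal sketch in Python of what such an automated scanning step could look like. The `score_image` function, the category labels and the flagging threshold are hypothetical stand-ins for a real trained model, not any specific product’s API.

```python
# Illustrative sketch of an automated content-scanning step.
# score_image() is a hypothetical stand-in for a trained classifier;
# in production it would run inference with a model trained on
# labelled "ground truth" images and videos.

from dataclasses import dataclass

# Example policy categories; real taxonomies vary by platform.
CATEGORIES = ["nudity", "sexual_activity", "extreme_violence",
              "self_harm", "hate_symbol", "weapon"]

@dataclass
class ScanResult:
    content_id: str
    scores: dict[str, float]  # per-category confidence, 0.0-1.0

def score_image(content_id: str, image_bytes: bytes) -> ScanResult:
    """Hypothetical model call returning a confidence per category."""
    # A real system would run the model here; neutral scores keep
    # this sketch self-contained and runnable.
    return ScanResult(content_id, {c: 0.0 for c in CATEGORIES})

def flagged_categories(result: ScanResult, threshold: float = 0.8) -> list[str]:
    """Categories whose confidence exceeds the flagging threshold."""
    return [c for c, s in result.scores.items() if s >= threshold]

result = score_image("upload-123", b"...raw image bytes...")
print(flagged_categories(result))  # [] with the neutral stub scores
```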

An automated approach not only speeds up the moderation process, but also ensures scalability, a crucial feature in an age where purely human moderation would be impossible given the sheer volume of content online.

A synergy of AI and people

AI automation brings significant benefits, as it allows companies to moderate at scale and reduce costs by not requiring a large team of moderators. However, even the most advanced technology requires human judgment, and AI is far from perfect on its own. Specific nuances and contextual clues can confuse systems and lead to inaccurate results. For example, AI may struggle to distinguish between a kitchen knife used in a cooking video and one brandished in a violent act, or mistake a toy gun in a children’s commercial for a real firearm.

So when AI flags content as potentially harmful or violating policies, human moderators can intervene to review the content and make the final decision. This hybrid approach ensures that while AI expands the scope of content moderation and streamlines the process, humans retain ultimate authority, especially in complex cases.
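As an illustration of this hybrid workflow, the sketch below routes content by model confidence: near-certain cases are actioned automatically, while the ambiguous middle band is queued for a human moderator. The thresholds are invented for the example; in practice they would be tuned per platform and per policy category.

```python
# Illustrative triage logic for the hybrid AI + human workflow.
# Threshold values are arbitrary examples, not recommendations.

AUTO_REMOVE = 0.95   # model is near-certain: act automatically
HUMAN_REVIEW = 0.60  # uncertain band: escalate to a human

def route(content_id: str, score: float) -> str:
    """Decide what happens to a piece of flagged content."""
    if score >= AUTO_REMOVE:
        return f"{content_id}: removed automatically"
    if score >= HUMAN_REVIEW:
        # Humans retain final authority on ambiguous cases,
        # e.g. a kitchen knife in a cooking video vs. a weapon.
        return f"{content_id}: queued for human review"
    return f"{content_id}: allowed"

for cid, s in [("clip-1", 0.97), ("clip-2", 0.72), ("clip-3", 0.10)]:
    print(route(cid, s))
```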

In the coming years, the sophistication of AI identification and verification techniques will continue to increase. This includes improving the accuracy of matching people appearing in different types of content to their identification documents, a next step in ensuring consent and curbing the unauthorized distribution of content.
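One common way such matching is implemented (an assumed approach for illustration, not a claim about any particular system) is to compare vector embeddings of faces with cosine similarity:

```python
# Sketch of embedding-based identity matching for checking whether
# a person in uploaded content matches the person on a submitted ID
# document. The embeddings below are toy values; a real system would
# produce them with a face-recognition model, and the threshold
# would be calibrated carefully against false accepts and rejects.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

id_document_embedding = [0.12, 0.80, 0.35, 0.44]   # toy values
content_face_embedding = [0.10, 0.78, 0.40, 0.45]  # toy values

MATCH_THRESHOLD = 0.90  # illustrative only
similarity = cosine_similarity(id_document_embedding, content_face_embedding)
print(f"similarity={similarity:.3f}, match={similarity >= MATCH_THRESHOLD}")
```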

Thanks to its ability to learn, AI’s accuracy and efficiency are constantly improving, with the potential to reduce the need for human intervention as the technology continues to develop. However, the human element will still be necessary, particularly in appeals and dispute resolution related to content moderation decisions. Not only do current AI technologies lack a nuanced perspective and understanding, but humans can also serve as a check against possible algorithmic biases or errors.

The global AI regulatory landscape

As AI continues to grow and evolve, many companies will look to regulators as they set out their plans for governing AI applications. The European Union is at the forefront of this legislation, with its Artificial Intelligence Act coming into force in August 2024. Considered a game-changer in the regulatory space, the Act categorizes AI systems by the risk they pose: systems presenting an unacceptable risk are banned, high-risk systems face strict obligations, and lower-risk systems are subject to minimal regulation.

As a result, an AI Office was established to oversee the implementation of the law, consisting of five units: Regulation and Compliance; AI Safety; AI Innovation and Policy Coordination; AI for Societal Good; and Excellence in AI and Robotics. The office will also monitor the timelines by which companies must comply with the new regulations, which range from six months for banned AI systems to 36 months for certain high-risk AI systems.

Companies in the EU are therefore advised to closely monitor legislative developments, assess the impact on their business activities and ensure that their AI systems are compliant within the set deadlines. It is also crucial for companies outside the EU to stay informed about how such regulations could affect their activities, as the legislation is expected to influence policy not only within the EU but potentially also in the United Kingdom, the United States and other regions, where AI regulations are likely to follow suit. Companies therefore need to keep their finger on the pulse and ensure that any tools they implement now are likely to meet these countries’ future compliance guidelines.

A collaborative approach to a safer internet

Successfully implementing AI in content moderation also requires a strong commitment to continuous improvement. Tools are likely to be developed before any regulations come into effect, so it is important that companies proactively review them to avoid possible bias, ensure fairness and protect user privacy. Organizations must also invest in ongoing training of human moderators to effectively handle the nuanced cases flagged for review by AI.

At the same time, given the psychologically demanding nature of content moderation work, solution providers must prioritize the mental health of their human moderators and provide them with robust psychological support, wellness resources, and strategies to limit prolonged exposure to disturbing content.

By taking a proactive and responsible approach to AI-powered content moderation, online platforms can create a digital environment that fosters creativity, connection and constructive dialogue while protecting users from harm.

Ultimately, AI-powered content moderation solutions provide companies with a comprehensive toolkit to address the challenges of the digital age. By monitoring and filtering massive amounts of user-generated content in real time, this cutting-edge technology helps platforms maintain a safe and compliant online environment and enables them to efficiently scale their moderation efforts.

However, when using AI, companies should keep a close eye on key regulatory texts, implementation deadlines and the impact of upcoming legislation.

If implemented effectively, AI can act as the perfect partner for humans, creating a content moderation solution that protects children as they access the internet and serves as a cornerstone of a safe online ecosystem.


Lina Ghazal

Head of Regulatory and Public Affairs at VerifyMy, specializing in online ethics, regulation and safety. She previously worked at Meta (formerly Facebook) and Ofcom.
