In today’s digital age, billions of pieces of content are uploaded to online platforms and websites every day.
Moderation of this material has therefore never been more critical and challenging. While most of this uploaded content may be positive, we are also seeing a growing amount of harmful and illegal material — from violence and self-harm to extremist rhetoric, sexually explicit images and child sexual abuse material (CSAM).
Dealing with this flood of harmful content is now a crucial challenge for companies. Those unable (or unwilling) to do so face significant penalties and put children at great risk.
This is what our own research revealed: over a third (38%) of parents have been approached by their children after the children had seen harmful or illegal content, and many children access material as graphic and damaging as CSAM within just ten minutes of going online.
Therefore, it is time for stronger content moderation efforts and for companies to move beyond traditional manual moderation methods that have become impractical and unscalable. Instead, they should leverage the complementary capabilities of AI that are changing the landscape of content moderation through automation, improved accuracy and scalability.
However, as with any new innovation, companies interested in using AI should implement the technology in a way that ensures regulatory compliance. The decisions companies make today will have a massive impact on their future operations.
The helping hand of AI
AI has drastically changed the content moderation landscape by leveraging automated scanning of images, pre-recorded videos, live streams, and other types of content on the fly. It can identify issues such as underage activity in adult entertainment, nudity, sexual activity, extreme violence, self-harm and hate symbols on user-generated content platforms, including social media.
The AI is trained on large amounts of “ground truth data,” drawing on archives of tagged images and videos ranging from weapons to explicit content. The accuracy and effectiveness of AI systems are directly related to the quality and quantity of this data. Once trained, the AI can effectively detect various forms of harmful content. This is particularly important in live-streaming scenarios, where moderation must happen in near real time and work across platforms with differing legal and community standards.
An automated approach not only speeds up the moderation process, but also ensures scalability — a crucial feature in an age where purely human moderation would be impossible given the sheer volume of content online.
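To make this concrete, below is a minimal sketch of what such an automated scanning step could look like. Every name here (the `classify_image` stub, the category list, the thresholds) is an illustrative assumption rather than any specific vendor’s API, and the classifier itself is stubbed out so the routing logic is runnable:

```python
from dataclasses import dataclass

# Illustrative harm categories a trained classifier might score.
CATEGORIES = ["nudity", "violence", "self_harm", "hate_symbol", "csam"]

@dataclass
class ScanResult:
    decision: str              # "allow", "review" or "block"
    scores: dict[str, float]   # per-category risk scores in [0, 1]

def classify_image(image_bytes: bytes) -> dict[str, float]:
    """Placeholder for model inference. A production system would run
    a classifier trained on labeled "ground truth" images; here we
    return dummy scores so the routing logic below can execute."""
    return {category: 0.0 for category in CATEGORIES}

def scan_upload(image_bytes: bytes,
                block_threshold: float = 0.9,
                review_threshold: float = 0.5) -> ScanResult:
    """Score an upload and route it: high-confidence harms are blocked
    automatically, borderline cases are queued for human review, and
    everything else is allowed."""
    scores = classify_image(image_bytes)
    top_score = max(scores.values())
    if top_score >= block_threshold:
        return ScanResult("block", scores)
    if top_score >= review_threshold:
        return ScanResult("review", scores)
    return ScanResult("allow", scores)
```

In practice, the two thresholds would likely be tuned per category and per jurisdiction, which is one way a single pipeline can serve platforms with different legal and community standards.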
A synergy of AI and people
AI automation brings significant benefits: it allows companies to moderate at scale and reduce costs by not requiring a large team of moderators. However, even the most advanced technology requires human judgment, and AI is far from perfect on its own. Subtle nuances and contextual clues can confuse these systems and lead to inaccurate results. For example, AI may fail to distinguish between a kitchen knife used in a cooking video and one brandished in a violent act, or mistake a toy gun in a children’s commercial for a real firearm.
So when AI flags content as potentially harmful or violating policies, human moderators can intervene to review the content and make the final decision. This hybrid approach ensures that while AI expands the scope of content moderation and streamlines the process, humans retain ultimate authority, especially in complex cases.
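As a rough illustration of that hand-off (again with hypothetical names, not a real platform’s API), the AI’s verdict on a flagged item can be treated as provisional context, while the human moderator’s decision is recorded as final:

```python
from datetime import datetime, timezone

def resolve_flagged_item(item_id: str,
                         ai_scores: dict[str, float],
                         moderator_decision: str,
                         moderator_id: str) -> dict:
    """Record the final outcome for content the AI routed to review.
    The AI's scores are kept as context (and for audit or retraining),
    but the human moderator's decision is authoritative."""
    if moderator_decision not in ("allow", "remove"):
        raise ValueError("decision must be 'allow' or 'remove'")
    return {
        "item_id": item_id,
        "ai_scores": ai_scores,
        "final_decision": moderator_decision,
        "decided_by": moderator_id,
        "decided_at": datetime.now(timezone.utc).isoformat(),
    }
```

Keeping the AI’s scores alongside the human decision also creates an audit trail, and reviewed cases can feed back into the training data described above.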
In the coming years, the complexity of AI identification and verification techniques will continue to increase. This includes improving the accuracy of matching people appearing in different types of content to their identification documents — a next step in ensuring consent and curbing the unauthorized distribution of content.
Thanks to its ability to learn, AI’s accuracy and efficiency are constantly improving, with the potential to reduce the need for human intervention as it continues to develop. However, the human element will still be necessary, particularly in appeals and dispute resolution related to content moderation decisions. Not only do current AI technologies lack a nuanced perspective and understanding, but humans can also serve as a check against possible algorithmic biases or errors.
The global AI regulatory landscape
As AI continues to grow and evolve, many companies will look to regulators as they set out their plans for governing AI applications. The European Union is at the forefront of this legislation, with its Artificial Intelligence Act entering into force in August 2024. Considered a game-changer in the regulatory space, the Act categorizes AI systems into three types: those that pose an unacceptable risk, those deemed high-risk, and a third category subject to minimal requirements.
To oversee the implementation of the Act, an AI Office was established, consisting of five units: Regulation and Compliance; AI Safety; AI Innovation and Policy Coordination; AI for Societal Good; and Excellence in AI and Robotics. The office will also monitor the compliance deadlines, which range from six months for banned AI systems to 36 months for certain high-risk AI systems.
Companies in the EU are therefore advised to closely monitor legislative developments, assess the impact on their business activities and ensure that their AI systems are compliant within the set deadlines. It is also crucial for companies outside the EU to stay informed, as the legislation is expected to influence policy not only within the EU but potentially also in the United Kingdom, the United States and other regions, where AI regulation is likely to follow suit. Companies therefore need to keep their finger on the pulse and ensure that any tools they implement now are likely to meet future compliance requirements in those jurisdictions.
A collaborative approach to a safer internet
Successfully implementing AI in content moderation also requires a strong commitment to continuous improvement. Moderation tools will likely be deployed before specific regulations come into effect, so it is important that companies proactively review them to avoid possible bias, ensure fairness and protect user privacy. Organizations must also invest in ongoing training for human moderators so they can effectively handle the nuanced cases AI flags for review.
At the same time, given the psychologically demanding nature of content moderation work, solution providers must prioritize the mental health of their human moderators and provide them with robust psychological support, wellness resources, and strategies to limit prolonged exposure to disturbing content.
By taking a proactive and responsible approach to AI-powered content moderation, online platforms can create a digital environment that fosters creativity, connection and constructive dialogue while protecting users from harm.
Ultimately, AI-powered content moderation solutions provide companies with a comprehensive toolkit to address the challenges of the digital age. By monitoring and filtering massive amounts of user-generated content in real-time, this cutting-edge technology helps platforms maintain a safe and compliant online environment and enables them to efficiently scale their moderation efforts.
However, when using AI, companies should keep a close eye on key regulatory documents, implementation deadlines and the impact of upcoming legislation.
If implemented effectively, AI can act as the perfect partner for humans, creating a content moderation solution that protects children as they access the internet and serves as a cornerstone of a safe online ecosystem.
Lina Ghazal
Head of Regulatory and Public Affairs at VerifyMy, specializing in online ethics, regulation and safety. She previously worked at Meta (formerly Facebook) and Ofcom.

