- Experts question gaps in SL’s capacity to monitor, identify, and act against such operations locally
- Warn the modus operandi can be replicated in Sri Lanka, call for improving media and hate speech literacy
The unfolding investigation into Geeth Sooriyapura, a young UK-based Sri Lankan content creator, has exposed a troubling and technologically sophisticated ecosystem of monetising Artificial Intelligence (AI)-driven misinformation, rage-bait content, and political manipulation that spans both countries.
What once appeared to be isolated online activity has now revealed a far deeper web – one that blends digital opportunism with the potential to inflame long-standing social and ethnic tensions.
At the centre of the controversy is the use of generative AI to create inflammatory posts, videos, and narratives – particularly anti-immigrant and Islamophobic content.
Media and Entertainment Lawyer Chanakya Jayadeva explained that public misconceptions about AI autonomy often obscured the true source of responsibility.
“AI, specifically generative AI, works only once you give the command. These bots – if they are told to do something – are doing so because the person told them to. They do not act on their own. Therefore this individual is liable for creating a technical device and ultimately, for creating hatred – Islamophobic and anti-immigrant hatred,” he noted.
Liability under Sri Lankan law
Jayadeva stated that although the technology may appear complex, liability under Sri Lankan law was clear: malicious instructions to an AI system do not absolve the human operator.
“If you look at the International Covenant on Civil and Political Rights (ICCPR) Act, there are several grounds on which you can be prosecuted for spreading religious or racial hatred, creating disharmony, specifically religious or racial disharmony. But you have to understand the degree, as well as the probable cause or probability, of creating racial and religious disharmony.”
Sri Lanka’s ICCPR Act, long criticised for inconsistent application, provides a robust mechanism to prosecute individuals spreading content that could incite religious or racial hatred. Jayadeva emphasised that Section 3(1) directly targeted such behaviour: “No person shall propagate war or advocate national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence.”
Prosecution is possible even when the offender resides abroad: “You can prosecute but can’t extradite if we don’t have an agreement. However, you can charge in absentia – like what happened with former Prime Minister of Bangladesh Sheikh Hasina. If he [Sooriyapura] ever comes here, he can get prosecuted,” Jayadeva told The Sunday Morning.
Other provisions in the ICCPR Act – including prohibitions on false statements meant to wound religious feelings, fraudulent inducement, and impersonation – could also apply.
Sri Lanka’s newer Online Safety Act (OSA) expands this legal net even further.
Section 12 prohibits false online statements that threaten public order, national security, or promote hostility between groups, while Section 19 criminalises circulating false reports meant to cause fear, mutiny, or public alarm. Section 20 targets the malicious use of bots and automated systems.
“The provision in the OSA also can be used. It refers to false statements and is primarily found in Part III, Section 12 of the act, titled ‘Prohibition of Online Communication of False Statements.’ This section applies to any person, whether inside or outside Sri Lanka, who communicates false statements that pose threats to national security, public health, public order, or that promote feelings of ill-will and hostility between different classes of people. Even if it doesn’t create disharmony but hurts a different community through humiliation, for example, the OSA can still be used to prosecute through Section 19.”
The Computer Crime Act adds a further layer, especially where unauthorised digital manipulation or actions affecting national security are involved.
Monetisation of rage-bait content
Cybersecurity expert Asela Waidyalankara said that the heart of the issue was not ideology alone but the monetisation of rage-bait content.
“He was teaching people how to create what we call rage-bait content. For example: ‘Muslims are taking over London.’ This is not factual, but it goes viral. Virality triggers Facebook’s algorithm to pay for engagement.”
Waidyalankara noted that although direct Facebook monetisation in Sri Lanka was limited, those with large followings could earn indirectly by pushing political and ideological narratives. Some operate in a ‘grey market’ paid by clients eager to leverage their influence.
“In Sri Lanka’s context, monetisation through platforms like YouTube is known, but direct monetisation on Facebook is less clear. However, page administrators with large followings can be paid to promote specific political or ideological narratives, operating in a grey market outside traditional monetisation methods. Such individuals have considerable influence due to their captive audiences,” Waidyalankara said.
Sooriyapura’s model – teach others to produce hate, boost engagement, and profit from the algorithm – creates a self-reinforcing cycle.
“He’s teaching people how to do this. Basically, he’s teaching people how to game the algorithm to make content based on the news or whatever, to create very contentious, rage-bait content. People engage with that content, and when you engage with it, you get paid.”
Detecting coordinated influence operations remains a major challenge in Sri Lanka.
“Detection capabilities are limited,” Waidyalankara said. “We need specialised interdisciplinary units. Other countries have them; we don’t.”
AI-generated content – deepfakes, synthetic voices, fabricated screenshots – makes the challenge exponentially harder.
Yet, Waidyalankara insisted that Sri Lanka did have the foundational talent and tools, if agencies worked together.
“It requires bringing the right experts together to critically analyse content with the appropriate technological support. With these efforts, it is possible to detect and mitigate these threats promptly, helping to safeguard social cohesion and political stability.”
Danger of emulation
Media analyst Nalaka Gunawardana warned that what happened in the UK could easily be replicated in Sri Lanka – and the social consequences could be severe.
“There is a danger of his actions being emulated here. Others who haven’t previously considered it might also try it. We can see a spiralling of hateful content.”
Gunawardana said that hate speech on Sri Lankan social media platforms was already a persistent issue, especially in Sinhala and Tamil, where AI-based content moderation struggled with nuance, metaphor, and coded language.
He outlined a three-layer moderation system involving automated review, human review, and community reporting, but noted that none of these layers was fast or comprehensive enough: viral hate can spread widely before removal.
“Extreme content is taken down either automatically or by human monitors, but that system is not foolproof. Some hate speech does get through. The next layer is when moderate users see it and report the content in a reactive way. The platform investigates and takes it down, but that takes some time. The first layer is proactive on the platform’s part. They have both AI and human monitoring ongoing 24/7, 365 days a year.
“Hate speech is not allowed by community standards. If it gets posted, it is taken down. But in our languages, some people have learnt to rig that system and get around it by using innuendo and metaphors instead of the actual hateful words. Therefore, the next layer is users – moderate users – seeing and reporting it, followed by an investigation. Some content may be taken down that way, but this process takes a few days. And a few days is a long time online, during which a lot of damage and harm can happen. Those are the pitfalls.”
The solution, he argued, was not more policing alone: “The scale is too big. We ultimately need digital literacy and digital citizenship – skills and ethics.”
Stark limitations
From the standpoint of the Sri Lanka Computer Emergency Readiness Team (SLCERT), the country’s main technical cyber response agency, there are stark limitations.
As SLCERT Lead Information Security Engineer Charuka Damunupola explained: “Not even the Government can control or monitor all content. We can’t take action unless a violation is reported. Platforms themselves can’t fully control content either.”
He noted that the only absolute control models existed in China or North Korea, where platforms were banned or replaced with Government-run alternatives – a route incompatible with Sri Lanka’s human rights obligations.
Sri Lankan authorities do have some tools. The Criminal Investigation Department (CID) Cyber Crime Unit has direct channels to request content removal on Facebook and TikTok.
The State Intelligence Service (SIS) monitors national security-related threats, while SLCERT can escalate complaints but cannot enforce action independently. Still, the volume of harmful content far exceeds monitoring capacity.
“There’s no individual-focused attention,” Damunupola said. “We only act when someone reports.”
Global stances
Countries like Denmark and Australia are moving ahead with stringent digital safety frameworks that Sri Lanka could learn from.
Denmark has banned social media use for children under 15, citing the need to protect vulnerable groups from harmful and manipulative content.
Australia has tightened regulations compelling tech platforms to act swiftly against misinformation, political disinformation, and hate speech.
Meanwhile, China’s new social media law offers a novel safeguard against the spread of false information for the purpose of monetisation.
Implemented in late October, it mandates that influencers addressing critical subjects such as finance, law, or health must possess relevant professional qualifications, like degrees or certifications. This measure aims to curb misinformation and guarantee the accuracy of advice by verifying influencer credentials.
Additionally, the regulations prohibit covert advertising of medical products and require clear disclosure if content is generated by AI.
The Sooriyapura case underscores how modern political manipulation crosses borders effortlessly, powered by AI and platform economics. What starts as online ‘content creation’ can quickly mutate into coordinated misinformation operations capable of destabilising societies. Sri Lanka, with its history of ethnic tensions, remains particularly vulnerable. Experts agree on one point: while laws and agencies matter, public digital literacy, ethical online behaviour, and societal resilience are the ultimate defence.