Sri Lanka’s digital economy currently accounts for approximately 3–4% of GDP and is expected to grow to 12% of GDP to meet the country’s $15 billion target, with the Artificial Intelligence (AI) sector projected to contribute $1.5–1.8 billion.
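Taken at face value, these figures are internally consistent: a $15 billion digital economy at 12% of GDP implies an overall economy of roughly $125 billion at that point, and the projected AI contribution amounts to 10–12% of the digital economy target:

$$\frac{15\ \text{bn}}{0.12} \approx 125\ \text{bn}, \qquad \frac{1.5\ \text{bn}}{15\ \text{bn}} = 10\%, \qquad \frac{1.8\ \text{bn}}{15\ \text{bn}} = 12\%$$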
While digital transformation is essential for the country’s digital economy goals, Sri Lanka faces the immediate challenge of managing the risks and ethical concerns associated with these new technologies.
In an interview with The Sunday Morning Business, LIRNEasia Data, Algorithms, and Policy (DAP) Team Lead and Research Manager Merl Chandana highlighted the implementation of AI governance as a priority going forward. He also discussed the significance of adopting a phased soft law approach to AI governance.
Following are excerpts:
Why is AI governance something Sri Lanka needs to worry about now, and not later?
The perception that Sri Lanka has not yet widely adopted AI is inaccurate. While it may not be as visible as in other countries in the form of big AI-focused companies, AI is already being integrated into sectors such as health, retail, transport, finance, and e-commerce.
The potential benefits and risks associated with this technology are already present. Viewing the risks of AI as a distant concern does not reflect our current reality. We must determine how to manage this technology responsibly today.
Preparing for the country’s digital future is equally important. There is significant movement regarding digital public infrastructure and the introduction of new digital services. This generates vast amounts of data, and when large-scale data handling meets AI, the need for oversight becomes immediate.
There is also growing pressure within both the public and private sectors to adopt AI. If these implementations are handled in a haphazard manner, they can create significant issues. Managing a technology this powerful requires a serious and immediate conversation about governance.
Sri Lanka recently published a draft National AI Strategy. What is the core philosophy guiding AI governance in that strategy and where are we in terms of the next steps?
While an updated version is yet to be released by the incumbent Government, it is unlikely that we will see a fundamental shift in the country’s priorities or its general approach to AI governance. However, the strategy likely needs updates to reflect the current state of the technology, as AI moves at a fast pace.
I expect an updated strategy to become official Government policy in the near future. Once it becomes an official Government strategy rather than a draft, the State can begin implementing specific actions, such as reallocating budgets, forming implementation bodies, and setting governance priorities.
From the beginning, our strategy has advocated for a phased, soft law approach to AI, much like many countries that share similar development priorities and state capacity. Rather than introducing a massive, comprehensive regulation immediately, this method leverages existing legal instruments to manage risks that arise due to AI. This is a pragmatic choice, as technology evolves rapidly and often outpaces our ability to update and effectively implement targeted AI laws.
The European Union’s (EU) experience with the AI Act shows how difficult it is to finalise comprehensive regulation; they had to revise their plans significantly after the rise of generative AI and continue to face implementation challenges.
Sri Lanka currently lacks that level of regulatory capacity, so a phased approach is more realistic in the current context. We can use existing tools like the Personal Data Protection Act (PDPA) and the Right to Information Act, interpreting them broadly and providing further clarifications to cover AI-related issues.
Does Sri Lanka operate in a legal vacuum when it comes to AI? Do we have legal instruments to regulate AI?
We are not starting from zero. The Constitution serves as a primary framework, specifically through provisions that guarantee equality and protect citizens’ rights. These fundamental principles are frequently invoked in court cases worldwide involving AI.
Beyond the Constitution, the PDPA is a significant tool since many AI-related harms involve the use of personal data. Clause 18 of the act already addresses solely automated processing of personal data, and there is room to expand or clarify these protections specifically for AI.
Several existing laws can be interpreted or adapted to protect citizens from harms arising from the use of AI. The Right to Information Act provides a framework for transparency, which can be interpreted or amended to cover disclosures when the Government uses AI in public services.
Other relevant laws include the Consumer Affairs Authority (CAA) Act, which can address discriminatory pricing or faulty AI products, and the Electronic Transactions Act for digital commerce. The Computer Crime Act covers cybercrimes, and the pending Cybersecurity Bill will add another layer of protection.
The priority should be to strengthen these existing regulations and build the capacity of the authorities responsible for enforcing them. By applying these laws to AI, we will identify where the actual gaps are. This structure provides a strong starting point for a soft law approach.
How does the PDPA actually help with AI?
Sri Lanka’s PDPA was not designed as an AI law, but it already provides important safeguards for how AI systems that rely on personal data are developed and used.
At its core, the act limits how personal data can be collected, reused, and repurposed, and requires that data be accurate, complete, and up to date. These requirements directly address common sources of AI harm, such as biased training data, outdated records, and the secondary use of data for profiling or automated scoring without proper justification.
Crucially, the PDPA also regulates decisions made solely through automated processing. Where decisions are based solely on automated processing and have significant effects, the PDPA allows individuals to request a review of those decisions and to appeal refusals to the Data Protection Authority, providing a mechanism for oversight and redress rather than full algorithmic disclosure.
This makes the use of AI in areas like credit, welfare, recruitment, or risk assessment a matter of legal accountability. While the PDPA does not regulate AI models themselves, it anchors AI governance in enforceable rights and obligations around data, fairness, transparency, and responsibility.
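As a schematic illustration of the mechanism described above (and only that: the class and function below are hypothetical, not an official reading of the act), the review right can be expressed as a simple rule:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """A decision about an individual, e.g. credit scoring or welfare eligibility."""
    solely_automated: bool    # made without meaningful human involvement
    significant_effect: bool  # legal or similarly significant impact on the person

def pdpa_review_path(decision: Decision) -> str:
    """Sketch of the oversight path described above; illustrative, not legal advice."""
    if decision.solely_automated and decision.significant_effect:
        # The affected individual may request a review of the decision,
        # and a refusal can be appealed to the Data Protection Authority.
        return "review available; refusal appealable to the Data Protection Authority"
    return "no solely-automated-processing review right triggered"
```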
Beyond data, what about consumer rights? If AI sells me a bad product or discriminates in pricing, who is responsible?
We need a consumer affairs authority for the digital age. The CAA Act was drafted for traditional goods and services and does not explicitly address algorithmic pricing or digital platforms.
Rather than creating new legislation, the more pragmatic approach is for the CAA to interpret ‘unfair trade practices’ to include opaque or unjustified algorithmic discrimination. If an algorithm charges consumers differently based on location or inferred vulnerability without transparency or recourse, that could fall squarely within existing consumer protection powers.
Some may argue that strong regulation stifles innovation. What is your opinion on this? How does the soft law approach manage this balance?
The idea that regulation inherently stops innovation is a common argument, but it is not entirely accurate. While poor regulation can be a hindrance, sensible regulation is actually necessary for sustained innovation. Consumer trust is a primary driver of business success, and that trust is built on the knowledge that safeguards exist if something goes wrong.
Regulation becomes problematic when it involves disproportionately high compliance costs, inconsistency, or unnecessary burdens on small businesses. If the rules are confusing or require expensive audits for low-risk applications, innovation suffers. However, a proportionate response that targets real harms provides the stability a market needs to grow.
A soft law approach is especially helpful here because it acknowledges that we do not have all the answers in a fast-moving field. By relying on principles and existing instruments rather than rigid, tech-specific rules, we avoid imposing hurdles that might be obsolete by the time they are enacted. This approach builds capacity and trust over time without slowing down the adoption of beneficial technology.
If we are to be specific about concerns – starting with fairness – how can the existing legal framework and soft law principles mitigate risks like algorithmic bias and discrimination?
The legal foundation for fairness is already present in the Constitution of Sri Lanka, which guarantees equality regardless of background. Also, the PDPA requires that data be accurate, complete, and up to date before it is used, which serves as a safeguard against biased models that lead to discriminatory outcomes. While this is not the only solution, it is a significant starting point.
A complementary soft law approach can further strengthen these protections through non-binding guidance, standards, and recommendations. For instance, where concerns arise around the use of AI in healthcare, the country’s AI authority can collaborate with sector experts to develop context-specific guidelines on data use, model design, and oversight mechanisms to reduce the risk of discriminatory outcomes. This approach enables timely risk mitigation without the rigidity or delay associated with formal legislation.
Transparency is essential for public trust. What specific tools does a soft law approach propose to improve transparency in AI use, especially by Government agencies?
There is a widespread belief that AI systems are ‘black boxes’ that no one understands. This is partly overstated, but more importantly, complete technical understanding is not always necessary for transparency. What matters is whether people can understand when AI is being used, why it is used, and how it affects them.
Transparency operates at two key levels, especially in government. First, there is system-level transparency: citizens should know where and for what purposes the government is using AI. This is essential for democratic accountability and oversight. Public AI use-case registries, already adopted in countries like the UK, are one way to achieve this. These registries document what systems are in use, the types of data involved, and which groups may be affected (a sketch of such an entry appears below).
Second, there is decision-level transparency. When an AI system influences an individual decision, such as eligibility for credit, welfare, or services, people should have clear information about what data was used, the factors that mattered, and how the decision can be questioned or appealed. Transparency here is less about revealing source code and more about ensuring meaningful explanations and accessible grievance mechanisms.
Soft law also supports independent audits of AI systems. Publishing audit summaries can reassure the public that systems have been assessed for fairness, accuracy, and potential harm, without exposing sensitive technical details.
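To make the registry idea concrete, here is a minimal sketch of what one entry might record; the schema and field names are assumptions for illustration, not a published standard:

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    """Hypothetical entry in a public AI use-case registry."""
    agency: str                 # which government body operates the system
    purpose: str                # why AI is used (system-level transparency)
    data_categories: list[str]  # types of data involved
    affected_groups: list[str]  # who may be affected by the system's outputs
    appeal_mechanism: str       # how decisions can be questioned (decision-level)
    audit_summary: str = ""     # published summary of any independent audit

entry = RegistryEntry(
    agency="Department of Example Services",  # hypothetical agency
    purpose="Prioritising welfare application reviews",
    data_categories=["income records", "household data"],
    affected_groups=["welfare applicants"],
    appeal_mechanism="Human review on request via the agency's appeals desk",
)
```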
Moving beyond the legal texts, how should Sri Lanka structure the institutional oversight needed for this soft law framework without creating unnecessary bureaucratic burden?
Institutional oversight should prioritise using existing structures rather than creating new bodies that require time, budgets, and scarce expertise.
One option is to leverage existing regulatory institutions with relevant mandates and technical competence, particularly those already dealing with data, digital systems, and compliance. Several countries have adopted this approach by incrementally expanding the scope of such regulators to address algorithmic risks. While this can be effective, it also requires careful calibration to avoid overloading institutions that are still consolidating their core functions.
A second option is to establish a non-statutory, multi-stakeholder advisory body, such as an AI governance council. Such a body can help set ethical direction, issue guidance, and coordinate across government and industry without immediately relying on legislation. In Sri Lanka’s case, this could be housed within an existing institution with a digital mandate, such as GovTech or a relevant ministry, minimising additional bureaucratic burden.
Both approaches reflect a pragmatic, capacity-aware path to institutional oversight: building on what already exists, avoiding unnecessary duplication, and allowing governance mechanisms to evolve as experience and expertise grow.
Finally, regardless of which oversight model is chosen, capacity building is unavoidable. If Sri Lanka relies on existing laws and soft law instruments, judges, lawyers, regulators, and frontline officials will need targeted training to interpret and apply these frameworks to AI-driven systems. This is a practical, near-term investment that will be necessary under any governance approach.
If an EU-style comprehensive law is not what is ideal for Sri Lanka, are there other regulatory models we could learn from in implementing the soft law approach that is being proposed? If yes, what makes those models specifically relevant for Sri Lanka?
While it is useful to look at international examples, regulatory models cannot be transplanted wholesale. State capacity matters.
India offers a relevant comparison. Although Sri Lanka and India differ in scale, both operate in resource-constrained environments and must balance innovation with governance. At the same time, India benefits from a relatively high-capacity central State, with strong technical and policy talent at the highest levels, which has enabled it to design and coordinate large-scale digital initiatives effectively.
India initially explored the idea of a dedicated AI law but has since shifted towards a softer regulatory model centred on governance guidelines aligned with global norms. This approach is supported by strong foundational legislation such as data protection, an AI Safety Institute to coordinate activity, collaboration with sectoral and industry bodies to articulate their own ethical principles, and guidelines for responsible AI development.
Singapore offers another interesting model. Its data protection agency has significant expertise and has developed a model AI governance framework along with an implementation guide for the private sector. It has even launched a State-supported open-source technical testing framework called AI Verify so companies can test their models for compliance against global standards.
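AI Verify’s actual interface is more elaborate than anything shown here; purely as a minimal sketch of the kind of check such testing frameworks automate, the following computes a simple demographic parity gap (the function and the toy data are hypothetical):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the gap in positive-outcome rates across groups, plus the rates.

    A large gap is one signal that a model treats groups differently,
    which is the sort of finding compliance-testing toolkits report.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: loan approvals (1 = approved) across two regions.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["urban"] * 4 + ["rural"] * 4,
)
print(rates, gap)  # {'urban': 0.75, 'rural': 0.25} 0.5
```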
Singapore also utilises regulatory sandboxes, which are controlled environments where companies, universities, and researchers can test new technologies under Government supervision. This allows for innovation without the immediate fear of legal violations while giving the Government a chance to learn about potential risks.
Sri Lanka can learn from these elements, but we should prioritise what we can realistically manage at the start.
In addition to the immediate next steps, if you had the capacity to change things, what would you do?
The first priority should be releasing an official national strategy as soon as possible. Relying on drafts makes it difficult to show that the Government is serious about AI.
Following that, we need a coordination committee. For it to be effective, members must have sufficient time and mandate to focus on this work. Too often, committees rely on volunteers juggling multiple responsibilities, which limits meaningful progress. What is needed instead are individuals with the right skills, clear accountability, and a genuine stake in the outcome.
Finally, the Government should define a set of clear principles for AI ethics and governance. This does not need to be a long document or conceived from scratch, but it should articulate the direction the country is taking. Releasing the strategy, forming a dedicated coordination body, and setting out ethical standards are the three primary actions that would move the agenda forward.