It may not be what you think
The label ‘artificial’ implies something fake or detached, but artificial intelligence (AI) is anything but detached from human reality. From Siri on smartphones to ChatGPT (Generative Pre-trained Transformer) in classrooms, AI is embedded in daily life. It is not an alien intelligence operating in isolation but a direct extension of human design, logic, and culture. Consider Google Translate: while it appears to ‘know’ dozens of languages, its outputs depend entirely on millions of human translations fed into its system.
The ‘artificial’ here is a matter of process, not essence. AI is still reproducing human activity, just in an accelerated computational form. By continuing to call it artificial, we risk creating a false narrative that exaggerates its independence from us. This misunderstanding feeds fears of AI as a competing intelligence rather than a human-anchored tool.
This framing matters because the language around AI influences how societies regulate it. For example, the European Union’s (EU) AI Act, passed in 2024, deliberately avoids treating AI as independent intelligence and instead defines it as ‘software developed with human-designed techniques.’ Such policy choices emphasise human accountability, showing that lawmakers reject the myth of autonomy.
Similarly, the United Nations Educational, Scientific and Cultural Organisation’s (UNESCO) Recommendation on the Ethics of AI (2021) stressed that AI is always ‘human-made and human-directed’. By confronting the misleading use of ‘artificial’, we can better understand AI’s nature as an amplifier of human power. This is not a philosophical quibble; it shapes whether we hold governments, corporations, and developers accountable for AI’s harms.
Data: The human DNA of AI
Every AI model is only as good as the data it consumes. Chatbots like OpenAI’s GPT or Anthropic’s Claude generate responses not because they ‘think’, but because they have been trained on vast datasets of human writing. Likewise, Spotify’s recommendation engine works because millions of users’ listening habits have been tracked and mathematically modelled.
Far from being artificial, these systems are repositories of human behaviour, transformed into predictive outputs. Data is AI’s DNA, and it is undeniably human. Without the texts, images, and actions of real people, AI systems would be empty shells.
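For readers who want to see how literally this holds, here is a minimal sketch in Python of an item-to-item recommender. The play counts are invented and Spotify’s actual system is far more elaborate, but the principle is the same: every ‘recommendation’ is a rearrangement of what human users already did.

```python
# A toy item-to-item recommender. The play counts are invented; the point
# is that every output is derived entirely from recorded human behaviour.
import numpy as np

# Rows = users, columns = songs; each cell is how often a user played a song.
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

# Cosine similarity between songs, computed from human listening alone.
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

def recommend(user: int, top_n: int = 2):
    """Score the songs a user has not heard by similarity to those they have."""
    history = plays[user]
    scores = similarity @ history
    scores[history > 0] = -np.inf  # never re-recommend what they already play
    return np.argsort(scores)[::-1][:top_n]

print(recommend(1))  # suggestions assembled entirely from other users' habits
```

Strip out the human play counts and the system can recommend nothing at all; the ‘intelligence’ is the data.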
But this dependence also creates risks. Cambridge Analytica’s misuse of Facebook data in 2016 demonstrated how human-generated information could be exploited to manipulate democratic elections. AI systems trained on such political data could reinforce voter biases at scale.
Similarly, in Sri Lanka, where social media misinformation played a role in the 2018 anti-Muslim riots, unregulated algorithmic systems amplified hate speech. This proves that AI is not detached; it is tightly bound to human actions and their consequences. Recognising data as the human DNA of AI underscores that we are always embedded in what AI produces, whether for good or for harm.
Intelligence or just imitation?
AI often gives the illusion of intelligence, but in reality it performs imitation at scale. Consider the International Business Machines (IBM) Corporation’s Watson, which famously beat human champions at ‘Jeopardy!’ in 2011. While Watson’s answers seemed intelligent, the system was not reasoning like a human contestant; it was matching clues to stored linguistic patterns and statistical probabilities. Similarly, large language models like GPT-4 or GPT-5 can draft essays, but they do not understand law, ethics, or human suffering. They produce coherent text by predicting the next likely word, not by reasoning with awareness. This raises critical issues about overestimating AI’s capacities.
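The mechanism can be made concrete with a toy model. The Python sketch below uses a two-sentence invented corpus; real large language models use neural networks trained on billions of documents, but the next-word principle is analogous. Fluent-looking text emerges purely from counting which word tends to follow which.

```python
# A toy next-word predictor. Real models are vastly larger neural networks,
# but the underlying task is the same: predict what word comes next.
import random
from collections import Counter, defaultdict

corpus = ("the court held that the contract was void "
          "the court found that the contract was valid").split()

# Count, for every word, which words follow it in the (invented) corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit a statistically plausible continuation; no meaning is involved."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # fluent-looking 'legal' prose, produced by counting
```

The output can read like legal prose, yet nothing in the programme knows what a contract is.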
The risks of conflating imitation with intelligence are real. In 2023, a lawyer in New York, United States (US), was sanctioned after submitting a legal brief written with ChatGPT that contained entirely fabricated case law. The AI had not lied with intent; it had statistically imitated the style of legal citations without distinguishing between real and fake precedents.
Likewise, Microsoft’s Tay chatbot in 2016 began producing racist tweets within 24 hours, not because it ‘believed’ in racism, but because it imitated toxic language fed to it by users. These examples show that AI does not possess true intelligence; it simulates human expression. Recognising this prevents blind trust in systems that cannot reason or be held accountable.
The problem of bias
AI does not eliminate bias; it often reproduces and amplifies it. A well-known example is the COMPAS algorithm used in the US to predict recidivism rates. Investigations by ProPublica in 2016 showed that COMPAS disproportionately labelled black defendants as high-risk while underestimating the risk for white defendants. This was not because the algorithm was malicious, but because it was trained on historically biased criminal justice data.
The same issue has been found in hiring algorithms, such as Amazon’s experimental recruitment AI, which was abandoned in 2018 after it was discovered to downgrade applications from women.
Bias is not only a Western problem. In India, the Aadhaar-linked biometric systems have sometimes failed to recognise rural workers, disproportionately excluding the poor from welfare benefits. In Sri Lanka, proposals to use facial recognition for public security raise similar concerns, especially since datasets often underrepresent darker-skinned populations.
These failures show that AI is not neutral but deeply social; it reflects the prejudices embedded in human data. The danger lies in treating AI decisions as objective when they are anything but. Recognising bias as structural, not incidental, is essential if AI is to be deployed ethically.
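A toy example makes the structural point plain. In the Python sketch below, every number is fabricated for illustration (COMPAS itself is proprietary); the ‘model’ simply learns the labelling rates in a skewed historical record and then hands them back as ‘risk’.

```python
# A toy illustration of bias inheritance. All numbers are fabricated:
# group B is over-represented among positive labels in the 'historical'
# record, so the learned risk score simply automates that skew.
from collections import defaultdict

history = ([("A", 0)] * 70 + [("A", 1)] * 30 +
           [("B", 0)] * 40 + [("B", 1)] * 60)

counts = defaultdict(lambda: [0, 0])  # group -> [negative, positive] labels
for group, label in history:
    counts[group][label] += 1

def predicted_risk(group: str) -> float:
    """The 'risk' is nothing but the historical labelling rate for the group."""
    negative, positive = counts[group]
    return positive / (negative + positive)

print(predicted_risk("A"))  # 0.3
print(predicted_risk("B"))  # 0.6 -- the record's prejudice, now automated
```

No malice is required anywhere in the pipeline; the prejudice arrives pre-installed in the data.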
Creativity and originality
AI art generators such as DALL·E and MidJourney have sparked debates about whether AI can be considered creative. A painting generated in the style of Pablo Picasso may look original, but it is based on statistical recombinations of existing works.
The same goes for AI-generated music that mimics Wolfgang Amadeus Mozart or AI-written poems that resemble William Shakespeare. Unlike humans, AI does not create with intention or emotion; it produces plausible outputs based on probability. Creativity here is imitation at scale.
Yet, these tools are reshaping industries. In 2022, an AI-generated image won first place in the digital art category of the Colorado State Fair in the US, raising controversy over fairness in human-versus-machine creativity. In law, debates have emerged around copyright ownership: should an AI-generated work be credited to the programmer, the user, or no one at all? The US Copyright Office ruled in 2023 that AI-generated images produced without human input are not eligible for copyright, stressing the centrality of human authorship. This shows that originality is not absolute but context-dependent.
AI challenges our definitions of creativity and forces us to rethink how society values human versus machine-made art.
Human dependence and control
Despite myths of autonomy, AI is profoundly dependent on humans. Self-driving cars, such as those tested by Tesla and Waymo, require constant human supervision, software updates, and retraining to adapt to new traffic conditions. When a Tesla operating on Autopilot was involved in a fatal crash in California, US, in 2018, investigators concluded that stronger human oversight and better safety systems could have prevented the accident. These examples show that AI is not autonomous but deeply reliant on human intervention.
Even generative AI models degrade over time if not updated. ‘Model drift’, where AI becomes less accurate as the world changes, requires constant retraining with new data. Moreover, human regulators must set ethical limits. The EU AI Act, for instance, bans ‘social scoring systems’ like those used in China, recognising the dangers of unchecked AI surveillance. In Sri Lanka, the absence of a national AI regulatory framework leaves citizens vulnerable to misuse, particularly in elections and public security. These realities demonstrate that AI is never self-contained. Its trajectory depends entirely on how humans design, maintain, and regulate it.
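Model drift, too, is easy to illustrate. In the Python sketch below, the rule, the data, and the accuracy threshold are all hypothetical; the point is that a human-chosen standard, not any initiative of the model, decides when retraining happens.

```python
# A toy illustration of model drift. The rule, data, and threshold are all
# hypothetical; retraining is triggered by a human-set standard, not by
# any initiative of the model itself.

RETRAIN_THRESHOLD = 0.9  # an accuracy floor chosen by human operators

def stale_model(x):                                    # learned from old data
    return x > 10

fresh_data = [(x, x > 15) for x in range(0, 30, 3)]    # the world has shifted

def accuracy(model, data) -> float:
    """Fraction of freshly labelled examples the model still gets right."""
    return sum(model(x) == y for x, y in data) / len(data)

score = accuracy(stale_model, fresh_data)
print(f"accuracy on new data: {score:.2f}")  # 0.80, below the human floor
if score < RETRAIN_THRESHOLD:
    # Retraining is a human process: collect fresh labels, refit, redeploy.
    print("below threshold -- schedule retraining")
```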
Rethinking what AI really is
Ultimately, AI is far less ‘artificial’ than its name suggests. It is grounded in human data, shaped by human priorities, and constrained by human oversight. Its failures (bias, drift, and imitation) are human failures reflected back at scale. At the same time, its strengths (efficiency, speed, and scalability) are human achievements extended through computation. AI is a mirror of society, not an alien intelligence.
The pressing issue is whether societies will shape AI responsibly or allow it to magnify inequality and injustice. The EU has begun to set global standards with its AI Act, but many developing countries, including Sri Lanka, have no comprehensive frameworks. Without regulation, AI could entrench existing divides, with global North corporations controlling the technology while the global South remains a consumer. Reframing AI as a socio-technical system rather than a separate intelligence compels us to take responsibility. The future of AI will not be determined by machines but by human choices in governance, ethics, and law.
(The writer is an attorney and Senior Law Lecturer at the University of Colombo)
……………………………………….
The views and opinions expressed in this article are those of the author, and do not necessarily reflect those of this publication