The fear of the machine and the myth of theft
When generative artificial intelligence (AI) entered lecture halls and student laptops, alarm bells rang across universities. Faculty meetings became forums for moral panic, and academic senates debated whether tools like ChatGPT, Gemini, and Claude marked the end of authentic learning. Many feared that these systems had ushered in an age of intellectual theft – a silent plagiarism machine capable of stealing human thought and dismantling centuries of scholarly integrity. This framing, however, rests on a misunderstanding of both technology and authorship. It presumes that AI ‘thinks’ and ‘creates’ in the same way that humans do, when in reality, it is a sophisticated linguistic predictor: an algorithm trained to reorganise and generate text through probability, instruction, and pattern recognition.
Generative AI does not steal ideas in the conventional sense; it reorganises information drawn from human-produced data, much as a researcher synthesises existing literature into new insights. It is a tool, not an agent, and its actions are entirely bound by the intent and design of its user. Just as a calculator does not ‘steal’ the process of arithmetic, AI does not ‘steal’ intellectual creation; it merely amplifies the efficiency of expression. The moral anxiety surrounding AI thus stems not from the reality of intellectual theft but from a lack of conceptual clarity about how meaning and originality are produced. The real ethical question, therefore, is not whether AI steals but how humans choose to use it: ethically, transparently, and within the frameworks of academic integrity.
Instructions define creation: Human agency behind AI outputs
Every generative AI output begins with a human prompt. A student types a query, an academic sets a parameter, or a writer provides an outline; the machine merely responds to these instructions. The originality of the output depends on the human input: the framing of the question, the precision of the context, and the intellectual depth of the prompt. The AI itself does not possess intention or consciousness; it operates by statistically predicting linguistic sequences. Meaning is created only when a human interprets and situates the output within a purposeful argument or narrative.
In this light, the ‘intellectual labour’ of using AI lies not in typing text but in designing thought. The human creator moves from the role of writer to that of architect, constructing conceptual frameworks, prompting critically, and evaluating the quality of AI-assisted drafts. A vague or poorly structured prompt produces a generic, uninspired response; a carefully formulated prompt can yield an intricate and original analytical structure. The distinction between theft and authorship therefore rests on the human’s cognitive effort. By designing the parameters that guide the machine, the scholar maintains authorship and agency.
This shift also redefines creativity itself. Where once creativity was measured by manual articulation, it is now measured by intellectual orchestration. Students are required to think meta-critically: to anticipate the logic of an algorithm, to filter biases, and to validate sources. Far from diminishing academic rigour, generative AI heightens it, demanding awareness of process, intentionality, and the epistemic limits of one’s tools.
The context of the social sciences: Where answers are never final
Unlike disciplines rooted in formulaic precision, the social sciences and humanities thrive on interpretation, argument, and context. In law, philosophy, political science, or sociology, knowledge is never final; it evolves through dialogue and debate. There is no single ‘correct’ answer to a question about justice, freedom, or power; there are only interpretations, perspectives, and competing rationalities. Within this epistemic landscape, the notion of ‘copying an answer’ loses its coherence. What matters is the reasoning process, not the retrieval of information.
When used responsibly, generative AI becomes a facilitator of this reasoning process. It can simulate opposing arguments, generate alternative framings, or assist in identifying theoretical blind spots. For instance, a student researching social contract theory might ask AI to juxtapose Thomas Hobbes’s and Jean-Jacques Rousseau’s visions of authority; the resulting synthesis offers a mirror for critique rather than a substitute for understanding. The student still performs the interpretive labour of validation, citation, and argumentation.
Furthermore, AI cannot intuit social nuance, moral judgment, or cultural empathy – dimensions central to social-scientific inquiry. Its limitations reinforce the necessity of human interpretation. The machine may summarise philosophers Michel Foucault or Jürgen Habermas, but only the scholar can discern their relevance to contemporary issues of surveillance or democratic decay. Thus, within disciplines built upon interpretive reasoning, AI is not an agent of intellectual theft but an amplifier of comparative reflection. It transforms the learning process into an interactive dialogue between human reasoning and computational suggestion.
Redefining authorship: Collaboration, not substitution
The meaning of authorship has never been static. With every technological leap, from the invention of the printing press to the rise of the typewriter, the word processor, and now generative AI, humanity has redefined what it means to create. Each of these shifts has provoked similar anxieties: fears that mechanisation would erode originality or human intellect. Yet, history demonstrates that technology has consistently expanded, not diminished, creative agency. AI represents the next chapter in this continuum: a tool of collaboration rather than substitution.
Generative AI does not erase the human author; it transforms authorship into an act of curation, judgment, and refinement. The scholar guides the process, contextualises the result, and embeds meaning into otherwise neutral text. Just as editors polish a manuscript or peer reviewers enhance scholarly argument, AI provides scaffolding that the author must refine through critical engagement. The distinction between human authorship and mechanical assistance remains clear: the mind behind the machine dictates the quality, coherence, and moral ownership of the final product.
Moreover, higher education has long accepted tools that mediate intellectual production: citation managers, grammar checkers, statistical software, and translation programmes. Each facilitates thought without replacing it. Generative AI should be viewed in the same lineage: an extension of human cognition, not its counterfeit. Authorship in the age of AI therefore demands transparency, intellectual honesty, and the recognition that originality lies in how humans shape, not merely generate, ideas.
Ethical use and academic literacy: Teaching the new competence
If universities continue to treat AI as a forbidden instrument, they risk alienating a generation of students already immersed in algorithmic culture. The appropriate response is not prohibition but pedagogy. Institutions should reorient academic practice toward AI literacy: the ability to use these systems critically, ethically, and reflectively. This literacy extends beyond knowing how to prompt; it involves understanding data bias, citation ethics, and the epistemological limits of machine-generated knowledge.
Teaching AI literacy would parallel existing forms of academic skill-building. Students learn to cite sources, evaluate evidence, and distinguish argument from opinion; similarly, they must learn when and how to disclose AI assistance. Universities should encourage transparent acknowledgment, such as stating that AI was used to generate preliminary drafts or outline structures, thereby normalising honesty and accountability. As with the use of statistical software in quantitative disciplines, the act of disclosure reinforces rather than diminishes integrity.
Furthermore, ethical AI literacy compels users to confront the biases embedded in training data and model outputs. A law student examining case summaries must verify their accuracy against authoritative sources; a political scientist using AI-generated definitions must assess their ideological framing. In this way, AI literacy cultivates critical vigilance, the very essence of the scholarly method. By institutionalising such practices, education can evolve to meet technological realities while preserving its moral foundations.
The real threat: Misuse, not machine use
The true danger to academic integrity does not reside in the machine itself but in the human temptation to misuse it. The ethical breach occurs when individuals conceal or misrepresent the extent of AI’s contribution, submitting unedited outputs as original work. This constitutes deception, not a technological crime. The same principle governs other academic violations: ghost-writing, plagiarism, or uncredited editing. The moral failure is human, not algorithmic.
Properly framed, AI can even promote inclusivity and equity in learning. Students for whom English is a second language, or those from under-resourced institutions, can use AI as a linguistic equaliser: clarifying grammar, testing structure, or improving expression. In this sense, AI can democratise access to academic discourse. The same algorithm that undermines standards in one context can empower participation in another, depending on the ethics of its use.
Universities must therefore distinguish use from abuse. They should develop clear policies defining acceptable assistance, model disclosure formats, and train evaluators to recognise legitimate engagement. Blanket bans only drive usage underground, encouraging dishonesty rather than integrity. A mature academic culture must embrace transparency, adaptability, and context-sensitive regulation rather than reactionary fear.
Between tool and theft: Choosing the future of learning
To call generative AI ‘academic theft’ is to misunderstand both academia and AI. Theft implies taking without consent, ownership, or contribution; generative AI, by contrast, operates within the consent and command of the human user. It generates possibilities, not property. The moral agency lies entirely with the scholar who formulates the prompt, verifies the response, and integrates it into coherent argumentation. Particularly within interpretive disciplines like the social sciences, where answers evolve through debate, AI cannot steal what is not fixed; it can only mirror, reformulate, and re-imagine.
The challenge before academia, then, is not to resist the machine but to master it. The real revolution is intellectual, not mechanical. Universities that teach students how to think with AI, rather than fear it, will cultivate a generation fluent in both ethics and innovation. This requires policies that acknowledge shared authorship, encourage disclosure, and reward critical reflection rather than mechanical output.
(The writer is an attorney and a Senior Law Lecturer at the University of Colombo)
----------
The views and opinions expressed in this article are those of the author, and do not necessarily reflect those of this publication