Are there any winners? Deepfakes: The influence on the Indian Election


28 Jun 2024 | BY Sandun Arosha Fernando


India's recent General Election provides a glimpse into the artificial intelligence (AI)-powered future of democracy, where politicians leverage audio and video deepfakes to connect with voters, often without the voters realising that they are interacting with a digital clone. India's political parties have exploited AI to warp reality through cheap audio fakes, propaganda images, and AI parodies.

But, while the global discourse on deepfakes often focuses on misinformation, disinformation, and other societal harms, many Indian politicians are using the technology for a different purpose: voter outreach. Across the ideological spectrum, they are relying on AI to help them navigate the nation’s 22 official languages and thousands of regional dialects, and to deliver personalised messages in far-flung communities.


As India wrapped up the world's largest Election this month, on 5 June, with over 640 million votes tallied, it became clear how political parties and factions utilised AI technologies, offering insights for other nations. Campaigns employed AI extensively, featuring deepfake portrayals of candidates, celebrities, and even deceased politicians. Estimates suggest that millions of Indian voters were exposed to these deepfakes. While there were concerns about widespread disinformation, the majority of campaigns, candidates, and activists harnessed AI in a positive manner during the Election, using it for standard political activities such as mudslinging, but predominantly to forge stronger connections with voters.



Exploring three instances


In a stunning revelation that shook the political landscape of Tamil Nadu (TN), a deepfake video surfaced ahead of the Indian Election, featuring the late Muthuvel Karunanidhi, a towering figure in the State's politics. Clad in his signature attire of dark sunglasses, white shirt, and golden-yellow shawl, Karunanidhi appeared in an eight-minute video purportedly congratulating a friend and fellow politician on the launch of their autobiography. The video, seemingly authentic at first glance, took a sinister turn as Karunanidhi proceeded to endorse his son, Muthuvel Karunanidhi Stalin, the current Chief Minister of the State. This endorsement carried significant weight, given M. Karunanidhi's revered status and enduring legacy in TN politics, even though he had passed away in 2018. The deepfake technology seamlessly replicated M. Karunanidhi's voice and mannerisms, leaving viewers bewildered and vulnerable to manipulation. The implications of this deepfake were profound, as it blurred the lines between reality and deception, exploiting the emotions and sentiments of the electorate. Supporters of M.K. Stalin seized upon the video as a validation of his leadership, while critics decried it as a cynical ploy to sway public opinion. The controversy surrounding the Karunanidhi deepfake underscored the growing threat of AI-generated misinformation in Indian elections.

Deepfakes facilitated Shakti Pratap Singh Rathore's political campaigning by revolutionising his voter outreach strategy. Although Rathore was not contesting in this election cycle, he is among the 18 million Bharatiya Janata Party (BJP) volunteers entrusted with bolstering Prime Minister Narendra Modi's Government's grip on power. Traditionally, Rathore would have spent extensive time traversing Rajasthan, a desert State roughly the size of Italy, to speak with voters individually, reminding them of how they have benefited from various BJP social programmes: pensions, free tanks of cooking gas, and cash payments for pregnant women. However, with the aid of Divyendra Singh Jadoun's deepfake technology, Rathore's task became significantly streamlined. Instead of physical interactions, Rathore spent 15 minutes recording a brief session in which he discussed key election issues, guided by Jadoun's prompts. The crucial element was not Rathore's physical presence but rather his voice, captured meticulously during the recording session. Jadoun used this data to generate personalised videos and calls, reaching voters directly on their phones. Through this approach, Rathore could address voters by name, discuss issues pertinent to them, and advocate for BJP support, all without the need for extensive physical campaigning. This strategy not only saved time but also demonstrated the potential of deepfake technology to reshape political communication and influence voter behaviour.

Deepfake technology has altered the landscape of Indian elections by addressing linguistic barriers and expanding the reach of political candidates to voters across diverse regional languages. While AI bots may encounter challenges in accurately translating local dialects, they serve as crucial tools in bridging communication gaps. For instance, during electoral campaigns, Prime Minister Modi utilised Bhashini, a Government-backed AI tool, to translate his speeches delivered in Hindi into Tamil in real time. This allowed Tamil-speaking audiences, particularly in the South, where Hindi is not widely spoken, to engage with his message effectively. Moreover, Modi's speeches were translated into several other regional languages such as Kannada, Bengali, Telugu, Odia, and Malayalam, amplifying his outreach. Additionally, Modi's official app, NaMo, introduced AI-powered chatbots before the Election. These chatbots played a pivotal role in disseminating information about the Government's policy achievements to a wider audience, contributing to the BJP's visibility and potentially garnering increased support among voters from linguistically diverse regions.


Did the parties gain seats?


Deepfake technology has become a prominent tool in political campaigns, influencing voter perceptions and behaviour. The impact of deepfakes on Indian elections has been particularly significant, as seen in the recent results.

In TN, the Indian National Developmental Inclusive Alliance (INDIA) experienced a sweeping victory, winning all 39 seats, a notable improvement from the 31 seats secured in the 2019 Election. This victory can be partly attributed to the deepfake video featuring the late M. Karunanidhi. The video, which appeared to show M. Karunanidhi endorsing his son M.K. Stalin, had a profound emotional impact on voters, reinforcing Stalin's leadership credentials and swaying public opinion in favour of the INDIA alliance. This instance underscores the potential of deepfakes to exploit the electorate's emotions and memories, significantly boosting the alliance's performance at the Poll.

In Rajasthan, the National Democratic Alliance (NDA), led by the BJP, maintained a strong presence by winning 14 seats, compared to a complete sweep of 25 seats in 2019. Despite the reduced number of seats, the innovative use of deepfake technology by BJP volunteers like Rathore facilitated efficient and personalised voter outreach. Rathore's use of deepfake-generated videos and calls enabled him to connect with voters individually, highlighting the benefits of the BJP's social programmes. This strategic use of deepfakes likely contributed to the NDA's ability to retain a significant number of seats in a challenging electoral environment.

In Andhra Pradesh, the NDA made substantial gains, securing 21 seats, a dramatic increase from zero seats in 2019. Similarly, in Telangana, the NDA won eight seats, doubling its count from the 2019 Election. The success in these States can be linked to the effective use of AI tools and deepfakes to overcome linguistic barriers and deliver personalised messages to voters. For instance, Modi's use of the Bhashini AI tool to translate his speeches into regional languages helped the NDA reach a wider audience. Additionally, AI-powered chatbots on Modi's NaMo app played a crucial role in disseminating information about Government policies, enhancing voter engagement and support.

In Odisha, the NDA won 20 seats, a significant increase from eight seats in 2019. The use of deepfake technology and AI tools for real-time translation and voter outreach played a crucial role in this success. By addressing linguistic diversity and enabling personalised communication, the NDA managed to expand its influence and secure a greater number of seats.



The ease of creating deepfakes


The ease of creating deepfakes, demonstrated by a recent experiment in which Ethan Mollick, a Professor at the Wharton School of the University of Pennsylvania, produced a convincing replica of himself in just a few minutes and at minimal expense, underscores the alarming accessibility of this technology. Deepfake creation, once a labour-intensive and costly process, has been streamlined by machine learning algorithms and off-the-shelf software that use neural networks to mimic a person's speech, facial expressions, and body movements. This accessibility poses significant threats, allowing for the potential defamation, reputational damage, or manipulation of both public figures and everyday individuals.

Despite efforts to regulate deepfakes through legislation and technological advancements in detection, the challenges persist, as their rapid dissemination and elusive origins complicate effective control measures. Vigilance, media literacy, and critical scrutiny of online content are essential safeguards against the deceptive potential of deepfakes in an era where the boundaries between truth and manipulation blur.



What are deepfakes?


Deepfakes are fabricated images, audio, and video produced with AI, merging deep learning techniques with the creation of deceptive media. Machine learning algorithms blend manipulated visuals and audio to generate convincing but fictitious people and scenarios. The technology, when deployed for malicious intent such as disseminating false information or propaganda, holds grave implications for public trust and security.
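
For readers curious about the underlying mechanics, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder design behind classic face-swap deepfakes: one encoder learns a generic facial representation, a separate decoder is trained for each person, and "swapping" means decoding one person's frame with the other person's decoder. This is a minimal illustration only, assuming PyTorch is available; the layer sizes are placeholders and a random tensor stands in for a real face crop. It is not any specific tool used in the Indian campaigns, and production systems add face detection, alignment, much larger models, and adversarial or perceptual losses.

```python
# Minimal sketch (assumes PyTorch): shared encoder + one decoder per identity,
# the architecture behind early face-swap deepfakes. Sizes are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per identity."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns a generic "face" representation; each decoder is
# trained only to reconstruct faces of its own person (A or B).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# The "swap": encode a frame of person A, but decode it with B's decoder, so the
# output keeps A's pose and expression rendered with B's appearance.
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder for a real, aligned face crop
with torch.no_grad():
    swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

In practice, both decoders are trained jointly with the shared encoder on thousands of frames of each person, which is why even a short recording session, like the one described above, can be enough raw material for convincing synthetic video and audio.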

Deepfakes have been implicated in various illicit activities, including scams, non-consensual pornography, election manipulation, social engineering, disinformation campaigns, identity theft, and financial fraud. Conversely, they can be ethically utilised for purposes such as parody, technological demonstration, historical recreations, and creative simulations, provided that proper disclosure and consent are maintained. Efforts to combat deepfakes encompass social media regulations, research initiatives, filtering programmes, corporate awareness, and legislative measures aimed at curbing their harmful impact and safeguarding against their misuse. 


(The writer is a media and fact-checking consultant)


---------------------------------------------------------


The views and opinions expressed in this article are those of the author, and do not necessarily reflect those of this publication


