AI and the future of social media


26 Apr 2026 | By Dr. Chandrika Subramaniyan


The digital landscape has undergone dramatic transformation over the past two decades, with social media emerging as a dominant force in human communication, commerce, and information dissemination. 

Today, as Artificial Intelligence (AI) capabilities accelerate at unprecedented rates, social media platforms stand at a critical intersection of technological innovation and societal responsibility. 

AI systems now mediate billions of daily interactions, from personalised content recommendations to automated content moderation across platforms like Facebook, Instagram, TikTok, and Twitter. However, as Generative AI (GenAI) systems grow more sophisticated, they are being adopted at a pace that far outstrips our understanding of their emerging risks and consequences.


Current state of AI in social media


AI now underpins nearly every aspect of social media, where platforms use it to personalise content feeds, recommend posts and videos, analyse sentiment, and automate moderation, shaping what users see and how they interact online.

Recommendation systems are the most influential of these tools. By analysing browsing habits, engagement patterns, and social networks, they predict what will keep each user watching or scrolling. Platforms like TikTok and YouTube have perfected this approach, but their focus on engagement can deepen filter bubbles and contribute to social and political polarisation.
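As a rough illustration only – not any platform's actual algorithm – an engagement-driven recommender can be sketched as scoring candidate posts by the user's historical engagement with each topic, which also shows how such a system naturally narrows what a user sees:

```python
from collections import defaultdict

def recommend(history, candidates, k=2):
    """Rank candidate posts by how often the user engaged
    with each topic in the past (a crude engagement proxy)."""
    # Count past engagements per topic.
    topic_weight = defaultdict(int)
    for topic in history:
        topic_weight[topic] += 1
    # Score each candidate by the user's affinity for its topic;
    # topics the user never engaged with score zero and are rarely surfaced.
    ranked = sorted(candidates,
                    key=lambda post: topic_weight[post["topic"]],
                    reverse=True)
    return ranked[:k]

history = ["cooking", "cooking", "politics", "cooking"]
candidates = [
    {"id": 1, "topic": "cooking"},
    {"id": 2, "topic": "science"},
    {"id": 3, "topic": "politics"},
]
top = recommend(history, candidates)
# The feed skews toward the dominant past topic -- the seed of a filter bubble.
```

Even this toy version makes the dynamic visible: content the user already favours crowds out everything else, which is exactly the filter-bubble effect described above.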

Natural Language Processing (NLP) adds another layer, enabling platforms to scan huge volumes of text, detect harmful content, and gauge public sentiment. Yet these systems still struggle with nuance and often reproduce biases found in their training data.
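To see why nuance is hard, consider a toy lexicon-based sentiment scorer – far simpler than the neural models platforms actually deploy, and the word lists here are invented for illustration:

```python
import re

# Hypothetical, hand-picked lexicons; real systems learn these from data.
POSITIVE = {"great", "love", "excellent"}
NEGATIVE = {"awful", "hate", "terrible"}

def sentiment(text):
    """Score text by counting lexicon hits: >0 positive, <0 negative."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love this, excellent work"))            # genuinely positive
print(sentiment("Oh great, another outage. I love waiting."))
# The sarcastic complaint scores just as 'positive' as the praise:
# word-counting cannot see tone, which is the nuance problem in miniature.
```

A sarcastic complaint and sincere praise get identical scores, which mirrors how even sophisticated classifiers can misread irony, dialect, or cultural context.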

Computer vision extends AI’s reach to images and videos, supporting features such as facial recognition, virtual try-ons, and automated detection of harmful visual content.

Together, these technologies create powerful but opaque systems – ones that enhance convenience and safety while raising serious concerns about surveillance, transparency, and user autonomy.

AI-driven personalisation uses algorithms to tailor content feeds, advertisements, and even interface layouts to individual users, promising reduced information overload and higher engagement. Yet this same personalisation brings serious risks. By repeatedly reinforcing user preferences, platforms can trap people in ‘filter bubbles,’ limiting exposure to diverse viewpoints and contributing to polarisation. 

AI-powered recommendations also shape consumer behaviour in subtle ways. When algorithms amplify one user’s choices, they can quickly turn individual preferences into broader social trends. 


AI’s role in content moderation 


Social media platforms now depend heavily on AI to manage the enormous volume of posts generated each day. Machine learning and NLP tools help flag hate speech, misinformation, and violent content, allowing platforms to respond more quickly than human moderators alone could.

At the same time, these systems can reflect and amplify bias, disproportionately targeting content from marginalised communities. Their opacity also leaves users uncertain about why their posts were removed, and limited appeal processes offer little opportunity for redress.

AI is also central to misinformation detection. Models trained on labelled data can identify many forms of fake or deceptive content, yet they still struggle with more sophisticated misinformation and with claims that vary across languages and cultural contexts. 
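The "models trained on labelled data" approach can be sketched as a stripped-down supervised classifier. This is purely illustrative – the example headlines are invented, and production systems use large neural models rather than word counts:

```python
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train(labelled):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"fake": Counter(), "real": Counter()}
    for text, label in labelled:
        counts[label].update(tokenize(text))
    return counts

def classify(counts, text):
    """Pick the class whose vocabulary best matches the text
    (naive word-probability product with add-one smoothing)."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(c)
        score = 1.0
        for w in tokenize(text):
            score *= (c[w] + 1) / total
        scores[label] = score
    return max(scores, key=scores.get)

# Tiny invented training set standing in for a labelled corpus.
data = [
    ("miracle cure doctors hate this secret", "fake"),
    ("shocking secret they don't want you to know", "fake"),
    ("study published in peer reviewed journal", "real"),
    ("officials confirmed the report on Tuesday", "real"),
]
model = train(data)
label = classify(model, "one secret miracle cure")
```

The sketch also hints at the limitation noted above: a classifier built from one corpus's vocabulary has nothing to say about claims phrased in another language or cultural idiom.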


GenAI and content creation 


GenAI is rapidly reshaping how content is produced and consumed on social media. With content creation more accessible than ever, anyone can produce polished material without specialised skills and reach wider audiences with far less effort.

These benefits come with serious risks. As AI-generated material becomes more convincing, the line between authentic and synthetic content grows increasingly blurred. This raises concerns about misinformation, especially as generative models can produce realistic deepfakes capable of manipulating public opinion and destabilising democratic processes.


Virtual influencers and AI-generated personas


Virtual influencers – AI-generated characters with lifelike digital identities – are becoming a new kind of presence on social media. These personas can create content, interact with audiences, and promote brands without the constraints human creators face, such as fatigue, illness, or personal setbacks, and they can operate continuously at minimal cost.

These AI-driven influencers are already being used in areas like health communication and marketing. They can deliver prevention messages and product recommendations effectively and at lower cost, but their rise raises concerns about authenticity, consent, and the displacement of human labour.


Immersive virtual worlds 


Immersive virtual worlds, where people interact in real time, are becoming another major space in which AI will reshape social media. Generative and conversational AI make it possible to build rich virtual environments and intelligent characters that respond naturally to users, adapting scenes, interactions, and experiences to individual preferences to create highly personalised virtual spaces.

AI also enables new forms of engagement in the metaverse – the shared 3D digital environment where users, represented by avatars, socialise, work, and receive customer service or entertainment through automated tools that generate environments and content on the fly. 

However, these settings introduce new privacy and safety risks. Continuous tracking of user behaviour, the capture of biometric data through avatars, and the potential for psychological manipulation in highly optimised environments raise serious concerns.


Algorithmic bias 


Algorithmic bias in social media arises from several sources – biased training data, design flaws, and limited testing across diverse user groups. As machine learning models learn from historical patterns, they often reproduce the same inequalities embedded in the data, reinforcing discrimination rather than correcting it.

Studies show that users from different demographic groups receive systematically different content, even when their preferences are similar. Content moderation systems show similar disparities, with posts from minority communities more likely to be flagged or removed. 

Gender bias is another significant concern. Beauty filters and image-ranking algorithms that encode Eurocentric beauty ideals often affect young women, especially in non-Western contexts where these standards may conflict with local cultural norms.


Information manipulation


Misinformation on social media has become a major threat to informed public debate and democratic stability. AI systems intensify the problem in two ways: they amplify false content through engagement-driven recommendation algorithms, and they enable the creation of highly convincing deepfakes and synthetic media.

Coordinated misinformation campaigns can shape voter attitudes and influence electoral outcomes. During elections and referendums, social media platforms often turn into arenas where competing false narratives circulate at scale, supported by AI tools that streamline both production and distribution, raising the political stakes considerably.

False health information about vaccines, treatments, and disease transmission can have immediate and serious public health consequences. During the Covid-19 pandemic, inaccurate vaccine information spread rapidly online, with AI-powered recommendation systems boosting sensational but medically incorrect content.


Data privacy 


Social media platforms collect vast amounts of personal data – information directly shared by users, as well as behavioural traces generated as they browse, click, and move across devices. Major platforms continue to harvest and process these extensive data, often relying on consent mechanisms that few users read or fully understand. 

AI systems deepen this surveillance by inferring sensitive traits such as health status, political preferences, or emotional states, even when users have never disclosed them. 


Recommendation systems


Fairness in recommendation systems means balancing competing priorities – user satisfaction, platform profits, creator visibility, and wider social values. Some companies have started testing fairer models. Netflix, for example, has introduced diversity measures to reduce bias while maintaining performance, showing that fairness and effectiveness can coexist, even if trade-offs remain.

Most platforms still prioritise engagement above all else, often pushing fairness aside. Without transparent metrics and independent audits, it remains difficult to tell whether these efforts reflect genuine reform or simply symbolic compliance.


(The writer is a solicitor and community mediator. Drawing on her knowledge and skills in various areas, she has trained and taught law, leadership, IT, and community management in TAFE institutes and universities in Sri Lanka, Australia, and India. She is currently a Director of the Western Sydney Local Health District Board and SydWest Multicultural Services, and is involved with Riverlink and Participate Australia. She is also an Advisory Member of the Justice Department of NSW, the Cumberland Council, and many other organisations, as well as a Fellow of the Asian Institute of Alternative Dispute Resolution)


(The views and opinions expressed in this article are those of the writer and do not necessarily reflect the official position of this publication)




