Tell me something true – underneath the epidemic of false news
By Dimithri Wijesinghe
Shortly after the Easter Sunday bombings in Sri Lanka, the spread of inaccurate information threatened to give way to mass hysteria, and the Government took the drastic decision to temporarily restrict social media across the island.
In order to stop the spread of “false news reports”, access to Facebook, Facebook-owned WhatsApp and Instagram, YouTube, Snapchat, and Viber was restricted. The shutdown did not appear to affect Twitter, which is not as widely used here. The restriction, which lasted roughly a week, has since been lifted.
The move is not without precedent, as Facebook was briefly blocked last year as well.
The restriction was intended to prevent the spread of misinformation; however, concerns were raised about the Government making snap decisions to disrupt digital communication avenues, particularly those the public relies on to contact family members and loved ones and ensure they are safe and unharmed.
Speaking about the spread of misinformation in times of crisis and the role of social media in such situations, Sri Lanka College of Journalism (SLCJ) Director and veteran journalist of over 25 years Shan Wijethunga said: “Sri Lanka does not have any ethical code with regards to the sharing of news on social media in order to prevent fake news and incorrect interpretations spreading throughout the platforms. However, I’ve observed that in the past few days, many of the web journalists were careful enough to act responsibly in their role. True, there were some instances of misinformation; however, for the most part, there was an exercise of necessary caution and a lack of sensationalising.”
He further added: “As for the temporary social media restriction, we have to realise that social media provides a unique platform for persons to articulate their views and engage in proper dialogue, and what we need to have is an organised effort to develop some guidelines within the industry, to reap its potential benefits; this need not be a government-led effort.
“We heard that the President mentioned how he had hoped to lift the ban sooner; however, there still exists proof of incorrect information circulating. And so what we have here is an example we can take into consideration to not allow such things to happen, following which, we can challenge the Government if they choose to resort to such decisions in the future.”
Veteran journalist and South Asian Free Media Association (SAFMA) SL Chapter Coordinator Thaha Muzammil said: “As long as people stick to a code of conduct, social media can be a powerful tool in sharing information, and truthfully, this time around, it must be said that there was incredible caution exercised by bloggers, web journalists, and the like in sharing information. Even traditional media exercised restraint. As for the ban, I believe that the Government did the right thing at the time. However, there was a question in continuing it for an extended time period; particularly considering how many alternatives existed, which were not so stringently policed, for people to find their way around it.”
Centre for Policy Alternatives Senior Researcher Sanjana Hattotuwa, speaking to The Washington Post, noted that the restrictions, while preventing the spread of false information, also prevented the sharing of accurate information. “While a ban on social media helps to contain the spread of rumours, it also hampers efforts by journalists to push back on them,” he said.
Why we turn to alternate forms of news
Verité Research Institute Head of Media Research Deepanjali Abeywardana stated that the primary reason the consumption of alternative forms of media is so prevalent is the general public’s mistrust of mainstream media.
The BBC conducted research in Kenya, Nigeria, and India, calling upon 80 volunteers who were then interviewed about their usage habits on digital media such as Facebook and Facebook-owned WhatsApp.
The research demonstrated that participants from all three countries made little or no attempt to investigate the source of the material they were sharing.
It was further revealed that an estimated 30% of the messages sent on WhatsApp could be categorised under “common-man stories, nationalism, and cultural preservation”.
Indian nationals proved to have turned to alternate mediums to disseminate information amongst themselves primarily as a result of their mistrust of mainstream media. This resulted in the promotion of unverified stories by persons who believed they were sharing “the real story”.
Using social media for spreading fake news
WhatsApp was a major factor in fuelling the violent acts now referred to as the Rainpada lynchings of 1 July 2018 in India. The images that incited the violence were later revealed to be images of Syrian atrocities, particularly of children killed during that conflict five years prior.
Regardless, the images were shared under the guise of showing victims of a ring of kidnappers operating in India, who then murdered their victims.
In March this year, the killings at the mosques in New Zealand were live-streamed on Facebook, and the social media site was unable to prevent the video from being shared on other platforms such as Twitter and YouTube.
While New Zealand Prime Minister Jacinda Ardern did not block social media, much stricter regulations were called for in order to prevent the unhindered spread of dangerous content online.
Following the incident on Easter Sunday, one major issue posed by the use of social media to share information was the spread of false news credited to legitimate media outlets; for example, false SMS news alerts attributed to legitimate news agencies were forwarded on WhatsApp.
There was also the case of an intelligence document shared on WhatsApp which should not have been accessible to the public; while there were doubts about its credibility at the time, it spread like wildfire.
Combating the spread of false information
Following the spread on Facebook of the video of the New Zealand mosque shootings, Australia passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 on 5 April 2019.
The Act polices social media companies, imposing criminal penalties of up to three years’ imprisonment and civil penalties of up to 10% of a company’s global turnover.
The EU too is considering a clampdown, specifically on terror videos, under which social media platforms would face fines if they did not delete extremist content within an hour. The British Government will run a consultation until 1 July on plans to create a legal “duty of care towards users”, overseen by an independent regulator.
In China, sites such as Twitter, Google, and WhatsApp are blocked, and their services are provided instead by Chinese applications such as Weibo, Baidu, and WeChat. Chinese authorities have also had some success in restricting access to virtual private networks (VPNs). China is stringent in policing social media platforms, screening messages deemed politically sensitive. New words seen as sensitive are added to a long list of censored terms and are either temporarily banned or filtered out from social platforms.
While China is at the extreme end of the spectrum and has, to an extent, effectively curbed the spread of information through alternate means, this is in no way a solution to the spread of false news in a democratic society.
In Sri Lanka
We spoke to TV Derana General Manager – Digital Janeeth Rodrigo, who highlighted the importance of verifying sources when sharing news items, particularly in a time of crisis.
“In times of crisis and emergency, one should only refer to traditional media outlets. This is primarily because television and radio channels are sanctioned by licenses issued by the Government and are therefore liable to lose those licenses if a proper system of checks and balances is not in place. There are repercussions to spreading misinformation, and therefore they are liable to a penalty, unlike alternative forms of media.”
He further commented on the effectiveness of the recent ban, stating: “It barely blunts the effective spread of such news, considering those who wish to be in the know will simply find their way around the system. However, during the imposed ban, Ada Derana saw a huge spike in users visiting the site, according to Google Analytics, a majority of whom were first-time users – that is, those who had not accessed the site in the previous 20 days.”
Do disruptions to digital communication achieve their intended purpose?
A report by research scholar and Stanford University Global Digital Policy Incubator (GDPi) Associate Director for Programmes Jan Rydzak finds that it cannot be said with any certainty that shutdowns curb the spread of unrest. The report was published last year by AccessNow.org, an international non-profit human rights, public policy, and advocacy group.
The report goes on to state that since the Arab Spring began in 2010, governments have carried out at least 400 shutdowns across more than 40 countries. These numbers include hundreds of brief shutdowns in India, where they first emerged as a localised response to unrest in the northern region of Kashmir and subsequently spread to most other states.
Research on early blackouts has shown that Egypt’s disappearance from the global internet in 2011 backfired, driving protesters away from Tahrir Square and into numerous decentralised pockets of resistance. Coordination of the demonstrations swiftly moved from Facebook event pages to individual efforts in each neighbourhood, which proved impossible for security forces to subdue. Ten days later, the Mubarak regime fell.
In the Syrian Civil War, the Government used shutdowns as a weapon of war, following up with increased violence against civilians.
The regimes of Abdelaziz Bouteflika in Algeria and Omar al-Bashir in Sudan both resorted to shutdowns before imploding. The drastic measures did nothing to rein in the protests in either country. Instead, shutting off internet access may have accelerated their downfalls.
Since 2011, internet shutdowns have become more common in more countries, with an estimated 188 taking place in 2018.
Why are we susceptible to spreading false information and misinformation?
Verité Research Institute Head of Media Research Deepanjali Abeywardana stated: “We fall victim to misinformation in times of crisis primarily because the general public doesn’t know how to verify the news that comes their way.
“There was a fantastic infographic on how to spot false news, and really what you must do is check your sources, check multiple sources. Journalists must depend on more than one or two sources to source their information. People are inclined to often share doctored images as they are often sensationalised and jarring, and it is essential to clarify the authenticity, particularly of such images.”
Why are we unable to tell the difference?
In an experiment run by Stanford University psychologist Sam Wineburg and the Stanford History Education Group, researchers set out to discover why even the smartest among us are so bad at judging what to trust on the web, and how we can get better.
The team found that regardless of age, level of digital know-how, or IQ, many people fail to inquire into the legitimacy of the content they consume, adding to the body of research on online gullibility.
MIT cognitive scientist David Rand found that, on average, people are inclined to believe fake news about 20% of the time.
Other findings include that six out of 10 links are retweeted without readers checking anything other than someone else’s summary, and that false stories travel six times as fast as true ones on Twitter, as “lies do a better job of stimulating feelings of surprise and disgust”.
Wineburg concluded: “We don’t fall for false news just because we’re foolish. Often, it’s a matter of letting the wrong impulses take over. We are social animals, and the desire for likes can supersede a feeling that a story seems dicey.
“Political convictions lead us to lazy thinking. But it has been revealed that there’s an even more fundamental impulse at play: Our innate desire for an easy answer. Humans like to think of themselves as rational creatures, but much of the time we are guided by emotional and irrational thinking.”
As something akin to a solution, Wineburg suggests in his research that we need to cultivate an awareness of “digital pollution” on the internet.
“We have to get people to think that they are littering by forwarding stuff that isn’t true,” he said. The idea, he explained, is to make people see the aggregate effect of little actions – how, one by one, ill-advised clicks contribute to the web becoming a toxic place.