Christchurch Call: Timely action or strategic vagueness?
Online/Offline Column by Nalaka Gunawardene

On 15 March 2019, the mosque shootings in Christchurch, New Zealand, killed 50 innocent worshippers and injured 50 more. The gunman – a 28-year-old Australian man, known to be a white supremacist – live-streamed the first 17 minutes of the attack on Facebook.
Tranquil New Zealand was devastated by this atrocity, but the country reacted with restraint and resolve. In less than a month, its Parliament passed a law banning most assault rifles and semiautomatic weapons. Even before this new law, New Zealand had much stronger gun laws than the US, where reforms to gun laws have long been resisted by the powerful gun lobby.
And two months later to the day, on 15 May 2019, New Zealand Prime Minister Jacinda Ardern, together with French President Emmanuel Macron, brought heads of state and government and leaders of the tech industry to Paris to adopt the Christchurch Call.
The Call is a commitment to eliminate terrorist and violent extremist content online. It is not legally binding, yet is a significant step forward in global policy responses to abuses of the internet.
It rests on the conviction that a free, open, and secure internet offers extraordinary benefits to society. As the description states: “Respect for freedom of expression is fundamental. However, no one has the right to create and share terrorist and violent extremist content online.” (Full text: www.christchurchcall.com)
The Christchurch Call was signed by 17 founding countries – including Canada, France, Germany, India, Indonesia, Japan, New Zealand, and the UK – as well as the European Commission.
While the US Government declined to attend (expressing concerns that US compliance could create conflicts with free speech protections in the Constitution), most US tech companies came on board. These include Amazon, Facebook, Google, Microsoft, Twitter, and YouTube.
These companies pledged to work more closely with one another as well as with governments to ensure that their platforms and technologies do not become unwitting conduits for terrorism and other forms of extremism.
As we discussed on 24 March 2019, Facebook, Google, and Twitter were heavily criticised after they struggled to take down numerous copies and versions of the Christchurch shooting video. Prior to the attack, the shooter had posted online a hate-riddled “manifesto” that contained references to previous mass killings.
The Christchurch Call has three interlinked sets of commitments, for governments, for tech companies, and for them acting together.
Governments commit, most notably, to “counter the drivers of terrorism and violent extremism” through a series of measures. These involve strengthening the resilience and inclusiveness of societies to enable them to resist terrorist and violent extremist ideologies. The strategies include education, building media literacy to help counter distorted terrorist and violent extremist narratives, and the fight against inequality in society.
In other words, it is a multi-pronged response. At the same time, there is a clear commitment to the “effective enforcement of applicable laws that prohibit the production or dissemination of terrorist and violent extremist content”. Signatory governments pledge to do so in ways consistent with the rule of law and international human rights law, including freedom of expression.
Governments also undertake to prevent the use of online services to disseminate terrorist and violent extremist content, by pursuing regulatory or policy measures, awareness-raising and capacity-building aimed at smaller online service providers.
Governments would also encourage media outlets to apply ethical standards when depicting terrorist events online, to avoid amplifying terrorist and violent extremist content.
The tech companies signing the Christchurch Call also make a series of commitments, to take “transparent, specific measures” to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal.
Among other things, platforms are to step up enforcing their own user codes of conduct, such as Facebook’s Community Standards. They would flag the consequences of sharing terrorist and violent extremist content online, and clearly outline policies and procedures for detecting and removing such content.
The companies also make a specific commitment to improve their oversight of live-streaming of audio or video content. In fact, the day before the Paris meeting, Facebook announced it would start restricting who can live-stream video on the platform. The company would enforce a “one-strike” policy, in which users who violate its rules – such as sharing content from known terrorist groups – could be prohibited from using its Facebook Live feature.
Platforms like Facebook have been accused of exploiting the human tendency to seek like-minded ideas that reinforce existing prejudices. After signing the Christchurch Call, tech companies must review their automated software (algorithms) that may drive users towards and/or amplify terrorist and violent extremist content.
Both governments and tech companies recognise the important role of civil society in research, advocacy, public outreach, and monitoring.
The Christchurch Call means well, but are such non-binding commitments enough to tackle this formidable problem? Also, can the Call’s provisions be misused by some governments to crack down on dissent and political criticism?
Some commentators have noted how the commitment’s text has been written with “deliberate, strategic vagueness” that can dilute its effectiveness. Others felt it could give too much power to tech companies to censor online content without much or any oversight.
Adrian Shahbaz, Research Director at Freedom House, a US think tank working on free expression and other civil liberties, said he was “alarmed by the vague call for governments to ban more speech” in a way that could have “negative consequences for human rights”.
He told The Washington Post that while greater regulation of tech companies was needed, “we shouldn’t be calling on tech companies to remove content without also demanding that they act with far more transparency and accountability. Otherwise, companies will censor first and ask questions later, leaving users with little recourse to appeal poor decisions and uphold their right to free expression”.
This divergence of views illustrates different approaches to protecting freedom of expression on either side of the Atlantic. Europe has traditionally shown a greater willingness to regulate tech companies, while in the US companies are given more freedom to self-regulate.
But it is now clear that tech companies with a global reach need to be held to higher standards of conduct and accountability than the mere corporate compliance regulations in their country of registration. What is the best mechanism for ensuring that?
Civil society concerns
Civil society and academic groups active on internet governance issues have found a plethora of procedural and substantive concerns in the Christchurch Call.
Many of these were raised during a closed meeting between Ardern and civil society and academic leaders on 14 May 2019. According to reports, she had spent hours listening to the issues that make governing platforms and online content particularly tricky and problematic from a human rights perspective.
Civil society leaders urged that attention must be paid to the governance of platform harms as a whole, not just “terrorist and violent extremist content” on social media.
Among other things, it was pointed out that “terrorism”, “terrorist content”, and “online extremism” should be clearly defined. The definitions of these terms vary greatly from country to country, and some authoritarian governments use them to mislabel legitimate political criticism.
It was also stressed that governments should not conflate social media platforms with all internet infrastructures. “Broadening the scope of the Call beyond social media platforms can endanger the global and open nature of the internet.”
Africa Digital Policy Project Manager Anri van der Spuy, who was part of the civil society critique of the Christchurch Call, wrote in an op-ed on 20 May: “Let’s hope that it is a first step to enabling a broader discussion about platform harms without enabling overhasty governance responses, a broader discussion that will pay special attention to the users, communities, and regions that are most susceptible to online risks.” (Read full text at: http://bit.ly/CCCritiq)
Can the Christchurch Call’s ideals be reconciled with protections for freedom of expression? What is the right balance between human rights and security?
These questions will need to be debated openly and widely – including by us in Sri Lanka who now face our own challenges of curbing violent extremism online and offline.
(Science writer Nalaka Gunawardene has been chronicling and critiquing information society for over 25 years. He tweets from @NalakaG)