Blog | The Speech Coin: Freedom v Hate


By Ellis Pike

In the wake of the horrific Christchurch attacks on the 15th of March that killed 51 people, the government will be looking at introducing new laws that make social media platforms responsible for the content that is posted.

Reasons for the Law

One of the most horrifying aspects of the tragedy was the fact that the shooter livestreamed the shootings on Facebook and posted his manifesto on the platform. Facebook removed the video when asked, and claimed to have removed versions of the video over 1.5 million times in the first day.[1] This shows the difficulty of controlling the spread of information posted on media platforms and highlights how easily people can be exposed to views and content that can be considered harmful and dangerous. Proposed changes to laws concerning social media content are designed to force companies to actively monitor content and remove anything harmful. The goal of the law is to reduce the exposure that people have to hate speech and other harmful media.

The Current Legislation

New Zealand’s laws on freedom of expression can be found in the New Zealand Bill of Rights Act 1990 under s 14.[2] The Human Rights Act 1993 outlines the prohibited grounds of discrimination in s 21, while s 61 prohibits expression that is “likely to excite hostility against or bring into contempt any group of persons in or who may be coming to New Zealand on the ground of the colour, race, or ethnic or national origins of that group of persons”. Therefore, any expression that would have the same effect but is based not on race but on other grounds of discrimination prohibited under s 21, such as religion or sexual orientation, would not be classified as hate speech. It is for this reason that Justice Minister Andrew Little wishes to review the Human Rights Act in order to strengthen the hate speech laws.[3]

The Pledge

Prime Minister Jacinda Ardern recently held the Christchurch Call to Action Summit in Paris, where 17 Governments and representatives from the world’s leading social media companies, including Facebook, Google and Twitter, signed the pledge.[4] The pledge outlined various actions that Governments could take, such as enforcing and reviewing laws to ensure they provide appropriate disincentives against committing hate crimes, as well as promoting ethical standards for all media and implementing industry standards. For social media companies, actions included greater transparency, enforcing online community standards, and establishing procedures to respond effectively to extremist content. There were also combined actions, such as working together to find technological solutions that would be more efficient at preventing extremist content from being released online.

The Two Sides of the Same Coin

The purpose of social media is to connect, to allow people to share. Removing the ability of some to do this could be seen as undermining the central focus of these services.

Facebook’s original mission statement was “to give people the power to share and make the world more open and connected”.[5] This indicates that the ability for people to share was central to the company’s values. Taking a stance on which views are acceptable to share and which are not arguably amounts to censorship. Although it is arguable that this is for the ‘good of society’, this reasoning could be the start of a slippery slope where increasing levels of censorship occur, and people are only exposed to certain views. An example of this can be seen in the escalation of Facebook’s content policy, where the definition of hate speech has been changed to encompass a large range of comments as a result of political pressure.

In 2013, Facebook released a statement outlining its approach to hate speech: “We prohibit content deemed to be directly harmful, but allow content that is offensive or controversial. We define harmful content as anything organizing real world violence, theft, or property destruction, or that directly inflicts emotional distress on a specific private individual”.[6] This focus on limiting extreme speech that can cause actual harm has since evolved into Facebook’s current position, which defines “hate speech as a direct attack on people based on what we call protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status. We define attack as violent or dehumanizing speech, statements of inferiority, or calls for exclusion or segregation.”[7] This is a notable shift, broadening the boundaries of what content is prohibited and narrowing the content that is allowed.

More recently, Facebook’s mission statement was changed in 2016 to be more community-focused: “to give people the power to build community and bring the world closer together”.[8] This still gives the impression that people are free to share their own views and be supported in communities of like-minded people from all over the world. It must also be noted that Facebook CEO Mark Zuckerberg stated that connecting was no longer enough and that Facebook has a “responsibility to do more in the world”, suggesting that the company’s focus is no longer solely on connecting people but also on bringing people from different backgrounds and beliefs together.[9] This, coupled with its content policy, indicates that Facebook aims to provide a safe space for all, where extremist views are removed before they can cause harm. Facebook’s signing of the Pledge also indicates that it is willing to do more in an effort to protect people from these views and to prevent others from using social media to distribute extremist content.

The Subjectivity of “Offence”

Proposed changes to laws around hate speech are fundamentally built around the idea of what causes “offence” to certain classes within society. ‘Offence’ is a subjective term, and that can create issues when determining whether content online ought to be allowed. Something that does not offend one person may offend another, and so it may be impossible to prevent everyone in society from being offended without banning any type of controversial content altogether. If we remove all content in the grey area to safeguard anyone who might be slightly offended, this raises the question of whether our ability to express important ideas will be constrained. Ideas such as evolution and climate change were once deeply controversial and offensive, yet now they are accepted within society. Many would argue that a ban on anything offensive could come at the expense of new scientific, social, and intellectual breakthroughs.

Conversely, it can be argued that no offence to anyone is acceptable; people should feel safe and be able to participate in communities both online and in the real world without fear. The actions of others to isolate and target certain groups within society should be prevented in order to maintain the community’s integrity. Without people coming together, a community means nothing; should that be enough justification for removing harmful content?


Facebook’s changes demonstrate the two sides of the argument: the protection of free speech and the eradication of hate speech. These two goals cannot be fully achieved at the same time, and so a decision must be made about where the line is drawn. The current balance is in favour of the eradication of hate speech. There are clear issues with this, such as certain entities deciding which views are acceptable and which are not. Women’s suffrage and gay marriage may have once been considered extremist views, and they are now accepted and normalised in society thanks to freedom of expression. However, the views being limited today are often inciting violence and causing harm to specific minorities in society. Is it simply enough to state that freedom of speech will be protected when it is used to promote rights and increase equality? Or that hate speech should be eradicated when it is used to segregate, promote violence and restrict the rights of those deemed lesser by the writer?










The Public Policy Club is a non-partisan club at the University of Auckland that aims to encourage, educate and involve students from all backgrounds in the education and development of political knowledge. The views and opinions expressed in this article are those of the author and do not necessarily reflect those of PPC.
