The developing law on hate speech
20 July 2022
Introduction
Harmful online content is a fact of modern life. In the pre-digital world, members of the public who wished to share their opinions with the outside world would ordinarily have to write a letter to a newspaper, or contribute a comment to a radio or television broadcast, all of which would be subject to supervision and editing by the media organisation before publication. Social media, however, allows for publication of content that is not only instantaneous, but also unfiltered and uncensored.
The Covid-19 pandemic resulted in a further increase in the use of the internet as a means for people to communicate, shop and conduct business, with all of the major social media platforms announcing ever-increasing user numbers.[1] With increased usage, of course, comes increased harmful content, and new instances of hate speech in particular make the news on a regular basis.
What is hate speech?
Hate speech is generally understood to describe various forms of communication – be they verbal or written – which have at their root the incitement and/or promotion of hatred or violence towards a person or persons by virtue of their membership of a defined section of society.[2]
The regulation of hate speech has always involved a delicate balancing act, given the strong protection afforded to speech in general, particularly by European legal instruments and institutions. Freedom of expression is specifically protected under Article 10 of the European Convention on Human Rights and Article 11 of the EU Charter of Fundamental Rights. The Strasbourg Court in Handyside v UK declared that the freedom to impart information and express ideas extends to those “that offend, shock or disturb the State or any sector of the population.”
The right to freedom of expression has regularly been cited by social media platforms in defence of their perceived reluctance to block content which others consider to amount to hate speech. The fact that platforms have largely been allowed to self-regulate when dealing with such material has led to calls for a more hands-on approach by legislators. The next few years, however, will see concerted efforts to regulate the online world, particularly the giant social media platforms, by imposing mandatory requirements as to the manner in which they deal with harmful content, and significant penalties if they fail to do so. This, as well as existing and proposed Irish legislation concerning hate speech, is considered below.
The role of internet intermediaries
As suggested above, online platforms such as Facebook and Twitter regularly come under fire for the manner in which they host abusive, racist commentary. The “Black Lives Matter” movement brought new focus to the issue, and in June 2020 pressure was brought on advertisers to withdraw their advertising from social media platforms which hosted such speech. This led a number of companies, such as Coca-Cola, Starbucks and Diageo, to announce a suspension of advertising, primarily on Facebook, in response to a perceived lack of progress in dealing with hate speech on its platform. This in turn led Facebook to expand its definition of what it held to constitute hate speech and to clamp down on advertising that promoted hatred towards protected categories.
As a result, Meta’s own definition of what constitutes hate speech now differs from that contained in the 1989 Act (considered below): “We define hate speech as a direct attack against people – rather than concepts or institutions – on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease. We define attacks as violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.”[3]
This definition is broader in respect of the categories it covers, and also sets the bar lower as to what constitutes “hate speech”, as it does not appear to require the promotion of hatred or violence. The extent of the problem can be seen in Meta’s own figures, which reveal that in the first three months of 2022 it dealt with over 15 million individual pieces of hate speech content on its Facebook platform.[4] Of this content, Meta claims that it identified and blocked 95.6% before it appeared on the platform, with only 4.4% having to be reported to it by users following publication.[5] It would appear, therefore, that the company is very efficient at blocking the appearance of such material.
It should be remembered that online platforms such as Twitter and Facebook will not automatically be held liable for hate speech content posted by their users. Under Article 14 of the E-Commerce Directive, such platforms may be held liable for hosting such material only if (a) they have been informed of its existence by another user and (b), having been so informed, they fail to act “expeditiously” to remove it. It appears that this immunity will be retained by the upcoming Digital Services Act (see below), which will update the E-Commerce Directive for the first time in over 20 years.
Current legislation
In this jurisdiction, the piece of legislation which governs hate speech is the 33-year-old Prohibition of Incitement to Hatred Act 1989.
Section 2(1) describes the offence of ‘Actions likely to stir up hatred’, and provides that it is an offence:
a) to publish or distribute written material,
b) to use words, behave or display written material—
i) in any place other than inside a private residence, or
ii) inside a private residence so that the words, behaviour or material are heard or seen by persons outside the residence,
or
c) to distribute, show or play a recording of visual images or sounds,
if the written material, words, behaviour, visual images or sounds, as the case may be, are threatening, abusive or insulting and are intended or, having regard to all the circumstances, are likely to stir up hatred.
The “hatred” under consideration in the Act reflects the grounds for discrimination under the Equal Status Act 2000. It is defined in the Act thus: “‘hatred’ means hatred against a group of persons in the State or elsewhere on account of their race, colour, nationality, religion, ethnic or national origins, membership of the travelling community or sexual orientation…” What will be held to be hate speech, therefore, is subject to two limitations:
a) It must be directed at a member of a protected category;
b) It must be intended to, or likely to, stir up hatred.
Online hate speech
There is obviously no specific reference to online hate speech in the 1989 Act, but it is sufficiently technologically neutral that it could be applied to online speech. The term “publish” is defined as meaning “to publish to the public or a section of the public.” This would appear to rule out private communications with individuals via direct messaging, a fact which would explain why the Act was not used in the recent case involving racial slurs sent to the well-known English footballer Ian Wright. In February 2021, an 18-year-old Tralee youth was charged under section 10 of the Non-Fatal Offences Against the Person Act 1997 with sending threatening and offensive messages to Mr Wright’s Instagram account. The messages used highly offensive racist language against Mr Wright, arising out of the teenager’s disappointment at having lost a game of FIFA soccer on his PlayStation. Mr Wright publicised the 20 or so individual messages on his public Instagram page the following day. Tralee District Court, however, guided by a report from the Probation Service, applied the Probation Act and left the defendant without a conviction.
Irish case law
This relatively lenient approach has echoes of the single reported case of online speech to have been dealt with under the 1989 Act. This occurred in September 2011, when a local man appeared before Killarney District Court. He had admitted to setting up a Facebook group page entitled ‘Promote The Use of Knacker Babies as Bait’, which suggested that traveller children could be used at feeding times in a zoo – a title which was clearly grossly offensive to the travelling community. The page had the lowest possible privacy settings, so the content was viewable by any internet user. A total of 644 people joined the group before it was taken down by Facebook.
While the Court considered the man’s behaviour to be ‘obnoxious, revolting and disgusting’, it held that there was a reasonable doubt as to whether he intended to incite hatred towards members of the travelling community, and that his actions should be considered a ‘once-off’ event. The court’s reliance on the defendant’s lack of intent is surprising, given that the relevant provision of the 1989 Act allows for the offence to be committed where stirring up hatred is intended or likely, meaning that intent is not a prerequisite for the offence.
Case law in other jurisdictions
This decision can be compared to a Scottish case from around the same time. On a Facebook group page entitled ‘Neil Lennon Should be Banned’, which was critical of the then Celtic football club manager, Stephen Birrell made sectarian comments about Catholics. His posts stated, inter alia: ‘Hope they (Celtic fans) all die. Simple. Catholic scumbags ha ha’ and ‘Proud to hate Fenian tattie farmers.’ Mr Birrell was given an eight-month prison sentence.
In the more recent UK case of R v Davison,[6] the defendant published various racist remarks on his Instagram account directed towards members of the Muslim faith, accompanied by a photograph of him holding a shotgun. The Court of Appeal upheld a sentence of four years’ imprisonment. It rejected a submission by counsel for the appellant that the case should be distinguished from R v Bitton,[7] in which the defendant had been handed a sentence of two years and six months for posting offensive material on Twitter, on the basis that Mr Bitton had twice the number of followers and that his Twitter profile did not have privacy settings in use. The Court pointed out that “although the appellant had a limited number of followers on his protected Instagram account, the postings had the potential for further dissemination of the material across social media.”[8] It also drew attention to the fact that the person who reported the posts to the police was not a follower of Mr Davison on social media.
In Beizaras & Levickas v Lithuania,[9] the European Court of Human Rights recently restated the requirement for contracting states to have robust measures in place to combat hate speech. The two male applicants had posted on Facebook a photograph of themselves kissing. This drew a series of offensive, homophobic comments, variously describing them as “faggots” who should be “shot”, “castrated” or “sent to a gas chamber.” When requested to do so by the applicants, the domestic Lithuanian authorities refused to prosecute the posters, describing the speech as “amoral” and “improper”, but not amounting to hate speech. The Strasbourg court upheld the complaint under Articles 8 and 14, and also criticised the domestic authorities for failing to appreciate the impact that the placing of such hate speech on Facebook could have on a victim. The Court drew a telling comparison between the “bigoted attitude” of the users who posted the comments and the “same discriminatory state of mind” of the authorities in refusing to prosecute them.
Proposed legislation
The Criminal Justice (Hate Crime) Bill 2021
In October 2019, the Minister for Justice and Equality launched a public consultation as part of a plan to update Ireland’s criminal law on hate speech and hate crime. The Criminal Justice (Hate Crime) Bill 2021 is intended to replace the 1989 Act, and is currently before An Seanad. The Oireachtas Joint Committee on Justice concluded its pre-legislative scrutiny of the General Scheme in April 2022.
There are two central aspects to the Bill – Part 1, which deals with Incitement to Hatred, and Part 2, which deals with Hate Crime. The latter proposes to create specific aggravated offences for existing crimes contained within the Non-Fatal Offences Against the Person Act 1997, the Criminal Damage Act 1991, and the Criminal Justice (Public Order) Act 1994. The aggravated offence will be engaged if “the motive of the perpetrator in committing the offence consisted in whole or in part of prejudice on the part of the perpetrator against a protected characteristic.”
Part 1 of the Bill proposes to update the provisions of the 1989 Act by simplifying them and enhancing the available sentencing options, while retaining the requirement that the hatred be directed at a person or group on account of a protected characteristic. It removes the reference in the 1989 Act to a “private residence”, and instead provides that the offence can be committed if the material is communicated “to the public or a section of the public by any means.”
Head 3 of the General Scheme relates to hate speech, and provides that:
(1) A person is guilty of an offence who –
communicates to the public or a section of the public by any means, for the purpose of inciting, or being reckless as to whether such communication will incite, hatred against another person or group of people due to their real or perceived association with a protected characteristic.
(2) A person guilty of an offence under paragraph 1 shall be liable –
(a) on summary conviction, to a class A fine or imprisonment for a term not exceeding 12 months (up from 6 months in the 1989 Act), or both, or
(b) on conviction on indictment, to a fine (unlimited) or imprisonment for a term not exceeding 5 years (up from 2 years in the 1989 Act), or both.
Sub-section 5 provides for the defences available to a person or a body corporate (such as a social media platform) which disseminates the material. For a person, it will be a defence if the material consisted solely of:
– a reasonable and genuine contribution to literary, artistic, political, scientific, or academic discourse,
– an utterance made under Oireachtas privilege,
– fair and accurate reporting of court proceedings,
– material which has a certificate from the authorising body, in the case of a film or book, or
– a communication necessary for any other lawful purpose, including law enforcement or the investigation or prosecution of an offence under this Act.
For a body corporate, it will be a defence if:
– the body has in place reasonable and effective measures to prevent dissemination of communications inciting hatred generally,
– was complying with those measures at the time, and
– was unaware and had no reason to suspect that this particular content was intended or likely to incite hatred.
The provision in respect of a “body corporate” is drafted so as to more closely resemble the provisions of the E-Commerce Directive and the defence of innocent publication under the Defamation Act 2009. It is a considerable improvement over the equivalent Section 7 of the 1989 Act, which required that the offence be “proved to have been committed with the consent or connivance of or to be attributable to any neglect on the part of a person being a director, manager, secretary or other similar officer of the body corporate,” a requirement which appears almost impossible to establish.
It is notable that the Joint Committee has recommended that the provisions concerning hate speech be removed from the Bill and included instead as an amendment to the existing 1989 Act, which would then be updated rather than replaced. The Committee appears to feel that the description of the legislation as a “Hate Crime” bill is confusing, as only Part 2 of the Bill bears that title, while Part 1 creates a separate, standalone offence. Because speech under Part 1 is of itself an offence, while the offences under Part 2 require a separate criminal offence to have taken place, the Committee suggests that the two parts be split into two different bills. It is unclear whether this recommendation will be adopted; if it is, it will clearly delay the enactment of the legislation.
The Online Safety and Media Regulation Bill 2022
Hate speech is also an area that will be regulated under the proposed Online Safety and Media Regulation Bill, which will, inter alia, govern the manner in which the large internet platforms deal with harmful online content. Under section 44 of the Bill (which will be inserted in section 139 of the Broadcasting Act 2009), “harmful online content” includes “Material which it is a criminal offence to disseminate under the Offence Specific Categories listed in Schedule 3 of the Bill.” Schedule 3 includes the Prohibition of Incitement to Hatred Act 1989.
The Media Commission to be established under the Bill will draw up an Online Safety Code to govern the standards and practices to be observed by designated online services. This Code will be designed to ensure that those services take measures to minimise the availability of harmful content, such as hate speech, on their platforms, and will provide guidelines as to user complaint and issue handling mechanisms. The manner in which user complaints are handled by the online services will be monitored by the Commission which, as well as specifying timelines within which complaints must be dealt with, may also have the power to demand that individual pieces of content be removed.
It is important to stress that, as the Bill currently stands, it is a “systemic” piece of legislation: it aims to monitor the operation of online platforms, rather than to provide an avenue for pursuing claims against those platforms. The Commission will not investigate individual breaches brought to its attention by the public; instead, the data pertaining to such breaches will be used generally to assess the manner in which the social media platforms are complying with their obligations under the Online Safety Code.
The EU Digital Services Act
As discussed above, the Digital Services Act will update the E-Commerce Directive, while retaining the same immunity from liability for internet intermediaries currently provided by Articles 12-14 of the Directive. It does, however, propose to create a new class of internet intermediary – the “online platform” – defined as “a provider of a hosting service which, at the request of a recipient of the service, stores and disseminates to the public information.” The Act clearly has the social media platforms in its sights; it should be remembered that such platforms were not in existence when the E-Commerce Directive was adopted in 2000.
The Digital Services Act appears set to apply requirements similar to those provided by the Online Safety and Media Regulation Bill to what it classifies as “very large online platforms”: platforms with over 45 million active monthly users, a category which will include the main social media platforms. Such platforms will be required to conduct an annual assessment of any significant risks stemming from the functioning and use of their services, including the hosting of illegal content such as hate speech, and Member States will be permitted to fine platforms up to 6% of their annual turnover for failure to comply with the provisions of the Act.
Conclusion
While the main social media platforms have shown themselves to be adept at preventing the publication of hate speech by users, it appears that the era of self-regulation of such content is drawing to a close. Both the Online Safety and Media Regulation Bill and the EU Digital Services Act signal the beginning of greater independent oversight of how such material is dealt with, with potentially large fines to be imposed on platforms which fail either to block it or to deal with it expeditiously.
This will provide some comfort to those sections of society which are regularly targeted by hate speech, the dissemination of which has been rendered more unfiltered, widespread and immediate than ever before by social media channels. When such material is published, however, it is important that the Irish courts deal with it as robustly as the courts of other jurisdictions. In this regard, the enhanced penalties provided for by the proposed Criminal Justice (Hate Crime) Bill 2021 would appear to be a move in the right direction.
Ends.
[1] For example, in the last quarter of 2020, Twitter saw its number of daily active users increase by 5 million to 192 million, leading to a 28% year-on-year increase in advertising revenue. Source: https://www.socialmediatoday.com/news/twitter-posts-record-revenue-result-in-q4-maintains-usage-growth/594824.
[2] The Cambridge dictionary defines it as “public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation”.
[3] https://transparency.fb.com/en-gb/policies/community-standards/hate-speech/
[4] https://transparency.fb.com/data/community-standards-enforcement/hate-speech/facebook/#prevalence. Instances of hate speech content on Facebook rose sharply from 5.5m in the last quarter of 2019 to 31.5m in the second quarter of 2021, but have been steadily decreasing since then.
[5] https://transparency.fb.com/data/community-standards-enforcement/hate-speech/facebook/#prevalence.
[6] R v Davison [2020] EWCA Crim 665.
[7] R v Bitton [2019] EWCA Crim 1372.
[8] R v Davison [2020] EWCA Crim 665, para 17(3).
[9] Beizaras & Levickas v Lithuania (app no. 41288/15, January 2020).