Review of the Defamation Act 2009

14 March 2022

The long-awaited amendment of the 2009 Defamation Act appears to be drawing closer, with the Department of Justice having recently completed its “Report of the Review of the Defamation Act 2009.”[1]

The focus of recent commentary has been on the recommendation to abolish jury trials in defamation cases, while not recommending placing a cap on available damages or introducing a requirement for a Plaintiff to establish “serious harm” along the lines of UK legislation.[2]

The focus of this article, however, is on Chapter 7 of the Report which deals with the “Specific nature of Online Defamation”. This article will look at the current difficulties with the 2009 Act insofar as it deals with online defamation, and how the Department’s Report suggests dealing with them. The areas covered in this article are:

a) The various participants in online publication;

b) The defence of Innocent Publication under section 27;

c) The availability of injunctions under section 33;

d) Anonymous publication and the Norwich Pharmacal procedure;

e) Issues the Report fails to address.

Preliminary observations

The Report touches on nearly all areas of concern in respect of online defamation, and it is to be welcomed for that reason. It is regrettable, however, that few clear-cut recommendations are arrived at. Most disappointingly of all, perhaps, the Report betrays a fundamental failure to grasp many of the nuances of defamation as performed via the internet.

Firstly, the Report falls into the common trap of referring to the myriad participants in the process of online publication in an ill-defined manner. The terms “internet service provider,” “webhosting company,” “online platform,” “internet platform provider” and “website operator” are variously used without any definitions, or explanation as to whether or how they differ from each other. This is particularly problematic in relation to the use of the term “Website Operator” in the specific context of the s.27 defence of Innocent Publication, considered below.

Difficulties are not confined to terminology, with questionable use of case law and legislation also in evidence. In its review of the “uncertainty” of the law surrounding the Article 14 “hosting” defence of the E-Commerce Directive, for example, the Report refers to the ECtHR decision in Delfi v Estonia,[3] and the CJEU decision in Papasavvas.[4] Neither of these cases actually considered the E-Commerce Directive – in Delfi, the domestic court had already held that it did not apply to the applicant news website, while in Papasavvas the action did not concern material posted by third-party users, so the “hosting” defence was irrelevant. In circumstances where more recent and more relevant case law is readily available, this is disappointing.[5]

In other cases, the Report simply falls into error. In dealing with the Norwich Pharmacal procedure, for example, the Report describes the eponymous case as concerning a breach of copyright, where it actually concerned the infringement of a patent.[6] The Report states that Norwich Pharmacal relief is granted “sparingly”, and as evidence of this cites a piece of commentary from 2015. With several more recent cases available to the Committee, which point to the increasingly frequent granting of such relief by the courts, it is unclear why it based its opinion on this piece of commentary alone. The Report also states that “Norwich Pharmacal orders can be costly for a plaintiff, who is usually expected to bear the costs of the service provider as well as their own.” Again, this comment is of dubious accuracy given recent developments in case law.

The Report deals with some legislation in a similarly unsatisfactory manner. The important issue of defamatory material posted online by anonymous internet users was the subject of recent, widely-publicised draft legislation in Australia, the Social Media (Anti-Trolling) Bill 2021. While the Report does refer to older Australian legislation – the Model Defamation Provisions 2020 – it makes no reference to the more relevant 2021 Bill. This omission is baffling, and appears to suggest that the Department was simply not aware of it.

a) The various participants in online publication

The Report recommends that the requirements for proving online publication should be clarified, as should the definitions of, potential liability of, and defences open to, “authors”, “editors” and “publishers”. This is clearly aimed at rectifying the much-complained-of issue with the s.27 defence of Innocent Publication in the 2009 Act, which provides a defence for such parties without defining any of those terms, instead relying on examples of what they are not.

The fundamental issue with these terms is that they appear to refer to the roles in a traditional newspaper or book publishing company, where the company itself had direct control over, and direct liability for, all the content which appeared in its pages. With the advent of websites which host content generated by third-party users, the terms “editor” and “publisher” in particular have become more problematic.

S. 27(2)(c) provides that the defence of not being the “author, editor or publisher” of the material will be open to an internet intermediary so long as they were “responsible for the operation or provision only of any equipment, system or service by means of which the statement would be capable of being retrieved, copied, distributed or made available.” (my emphasis) The difficulty here is caused by the word “only”, which appears to suggest that intermediaries which go further than to simply provide a platform for users to post their own content will not have the defence open to them.

Companies such as Meta (which operates Facebook, Instagram and WhatsApp), Twitter, LinkedIn, TikTok and Google (which operates YouTube) appear to do more than simply provide the platform to users. They monitor, process and analyse all data pertaining to the use of whatever platforms they operate, and use this data to create targeted advertising campaigns for clients.[7]

In circumstances where these companies clearly go further than a strict interpretation of simply providing electronic equipment, it is questionable as to whether they should be able to avail of this defence. 

It is worth remembering that the s. 27 defence was being drafted at a time when the dominance of social media platforms could scarcely have been predicted (Facebook was only launched in 2004, Twitter in 2006), so it is understandable that this section would not specifically address the issues they create. Disappointingly, there is no reference to this difficulty in the Report’s recommendations.

b) The defence of Innocent Publication under section 27 for “Operators of Websites”

The Report recommends that the s.27 defence should be extended to “operators of websites”. As the Report states, such operators already avail of the defence, so this recommendation would do little more than codify an already widely accepted practice. More importantly, there are two difficulties with this suggestion. 

Firstly, the Report says that “such a defence already exists in England and Wales”, a clear reference to the s.5 defence under the UK Defamation Act 2013. The s.5 defence for “Operators of Websites”, however, is not akin to the defence of Innocent Publication, the latter being echoed in the UK by s.1 of the Defamation Act 1996. Instead, s.5 is a stand-alone defence with different requirements from those of Innocent Publication.

To simply incorporate “operators of websites” into the s.27 defence is to miss the point of the analogous defence in other jurisdictions for these operators. The focus of those provisions, and particularly the new Social Media (Anti-Trolling) Bill 2021 in Australia,[8] is to provide a defence for such operators only under specific circumstances. This involves a clear Complaints Mechanism being in place which is required to be followed by both the complainant and the website operator, and is designed to bring the complainant and the author of the material together as expeditiously as possible.

A central aspect of the analogous procedures in the UK and Australia is the requirement that the website operator knows the identity of the user who has posted the comment, even if they did so anonymously, so that the complainant will have an identifiable defendant against whom to bring proceedings. If the operator does not possess this information, it appears that its defence will fail. 

There is currently no requirement for social media platforms to verify the identity of users who sign up to their platforms, many of whom do so simply with an email address. This is a source of frustration to complainants who, even after compelling a platform to disclose such information as they have about an anonymous user, discover that this information is of little value in seeking to serve the latter with proceedings.

There is, unfortunately, no reference in the Report to the requirement specified in the UK Act and Australian Bill, and it thus appears to miss the point that at the heart of this Complaints Mechanism is a requirement for anonymous users to be ultimately identifiable. This is a significant weakness in the Report’s findings.

Secondly, “operators of websites” is a very broad term, which could certainly be considered to cover social media platform operators such as Facebook, Twitter, Google and TikTok, all of whom technically operate websites. The Report, however, refers on several occasions to “social media platforms” and “operators of websites” as being separate types of entities. It appears that the Report’s intention, therefore, is to cover only websites whose content is generated primarily by the website operator, but which also allow users to post comments, the most obvious example being news websites. The Report’s position on social media platforms under s.27, therefore, as well as what the extension of the defence to “operators of websites” would mean in practical terms, remains frustratingly unclear.

c) The availability of injunctions under section 33

The Report recommends an amendment to the manner in which an applicant can apply to have defamatory material removed from the internet, either at the conclusion of proceedings or at an interlocutory stage. While such relief is already available under section 33 of the 2009 Act, the intention here appears to be to ease the burden on the applicant seeking such an order.

This is certainly a welcome development, although it is unclear exactly what the Report envisages in suggesting a “faster mechanism” to deal with applications to take down material. The problem with s.33 is not that it is slow – the problem is that the requirement to show that the Defendant has no defence that is reasonably likely to succeed, in an application against an internet intermediary, is a hugely problematic one.

The central difficulty with this section in the age of internet publication is the interpretation of the word “Defendant” in proceedings which involve an internet intermediary, such as a social media platform. The Act substantially predates the existence of such platforms, and appears to envisage such an application being made directly against the person or entity which bears primary legal liability for the statement. In those circumstances, the question to be considered by the Court would be whether such party has any defence in respect of a statement, the traditional defences open to it being those of truth, qualified privilege or honest opinion.

With much of the defamatory content placed online now being hosted by third-party platforms such as Twitter and Facebook, however, the problem arises when an application is made against this platform, rather than the primary publisher, to have material removed. Can Twitter or Facebook be correctly characterised as the “Defendant”, for the purposes of s.33, when they are clearly not the author of the defamatory statement, and have no need to resort to the traditional defences relied upon by those with primary liability in defamation proceedings?

The difficulty with this section was illustrated in Muwema v Facebook [2016] IEHC 519, which considered s.33 in conjunction with the defence of Innocent Publication under s.27. Reading the two sections together, the Court came to the conclusion that because a social media platform such as Facebook would always have the defence of Innocent Publication open to it, and as s.33 required the Applicant to establish that the Defendant had no defence that is reasonably likely to succeed, then an injunction requiring it to take down material could never be granted against a platform such as Facebook.

Notwithstanding the fact that the trial judge appears to have misconstrued s.27 in Muwema, it is certainly arguable that to consider the internet intermediary as being the “Defendant” for the purposes of s.33 creates a serious obstacle to any plaintiff seeking to prevent the publication of material in the age of internet publication.

Logic would seem to dictate that the correct test for an application under s.33 is whether the author, editor or publisher (the “Defendant”) has no defence that is likely to succeed, while the test to be applied as to whether an internet intermediary should be obliged to take down the material should be a separate, less onerous one for the Applicant to meet. If this is, indeed, what the Report recommends, then it is to be welcomed. Unfortunately, that is not made clear.

d) The Norwich Pharmacal procedure

Perhaps the single most welcome recommendation of the Report in respect of online defamation is that a statutory power be created to grant Norwich Pharmacal relief, and that this power be extended to the Circuit Court. A long-standing problem with anonymous online defamatory material has been the difficulty, and expense, involved in simply trying to ascertain the identity of the author, the Norwich Pharmacal procedure having previously been available only pursuant to the inherent jurisdiction of the High Court. This restriction will hopefully be removed following the review of the 2009 Act.

Even this apparently straightforward recommendation, however, is not without its difficulty. The Report describes a Norwich Pharmacal Order as being one “directing an online services provider to disclose the identity of an anonymous poster of defamatory material.” This is simply incorrect. In reality, the Order directs the provider to disclose such information as it possesses about that person’s identity, which unfortunately is often a very different matter. With no requirement for a user to verify their identity when signing up to a platform, the information which the platform possesses is often little more than a generic email address, which is of little use in identifying the user. This issue was specifically referred to in the High Court case of Parcel Connect t/a Fastway Couriers & Anor v Twitter.[9]

More worrying is the fact that, aside from the issue of the Norwich Pharmacal procedure being available in the Circuit Court, the Report does little to address the problem of online defamation by anonymous users. This is discussed below.

e) What the Report fails to address

1) Defamatory publications by anonymous online users

The difficulties posed by the publication of defamatory material by anonymous users of social media platforms are discussed separately here. Suffice it to say that the publication of unlawful material by such users is a growing problem, which is exacerbated by the almost blanket reluctance of social media platforms to voluntarily reveal the identity of the authors of such material when requested to do so. Instead, the victim is obliged to apply for a Court order via the Norwich Pharmacal procedure, to compel the platform to release such information as it possesses in respect of the anonymous user. Aside from the recommendation that such an Order be available in the Circuit Court, the other difficulties with this particular procedure which are separately discussed here remain.

Despite various oblique references to anonymity, such as “Efforts to identify persons behind non transparent social media content are not always straightforward, and are complicated by user anonymity,” no specific recommendations of any sort are made in respect of how to deal with online anonymity, such as compelling social media platforms to have verifiable information about their users’ identity. This is most disappointing.

2) The limitation period under section 38

While the limitation period for defamation generally was considered, there was no discussion on the anomaly which arises in respect of online publication. Under section 38 of the 2009 Act, the date of accrual of the cause of action is “the date upon which the defamatory statement is first published and, where the statement is published through the medium of the internet, the date on which it is first capable of being viewed or listened to through that medium.”

This means that if published otherwise than on the internet, the cause of action accrues when the statement is first viewed, which is consistent with the traditional concept of when publication occurs. If the statement is published on the internet, however, the cause of action accrues when it is first capable of being viewed, which equates to when it is first uploaded. With settled law holding that the mere availability of material on the internet does not give rise to proof of publication,[10] it therefore appears that the cause of action for defamation via the internet accrues before publication takes place. This creates a clear inconsistency based on the manner of publication. While it would appear to be easy to resolve, the Report makes no reference to this anomaly, and no recommendations for any amendment. 

Conclusion

While the Report of the Review of the Defamation Act 2009 is long overdue, it is clearly something that should be welcomed. It is in general terms very comprehensive, and does at least devote 10% of its c. 300 pages to the issue of Online Defamation.

Frustratingly, however, much of the commentary about online defamation is cursory, superficial and, in some cases, ill-informed. Given the fact that it is being published in 2022, it is hugely disappointing that so much of the material cited is, in internet terms, relatively old, and in some cases plainly out of date. The cumulative effect of the issues considered above unfortunately creates the impression of a Review Board short on the internet law expertise required to deal with its complex and rapidly-evolving nature.

It does little to instil confidence in the hope that this specific area of law, which is only going to grow in importance over the coming years, will be adequately catered for when the revised legislation is ultimately enacted.

Ends.


[1] Available at https://www.gov.ie/en/publication/4478f-report-of-the-review-of-the-defamation-act-2009/

[2] An exception to the last of these might be provided for in respect of cases of “transient publication” (eg being stopped by a security guard in a retail outlet).

[3] Delfi AS v Estonia (Application no. 64569/09).

[4] C-291/13 Papasavvas v O Fileleftheros Dimosia Etairia.

[5] One of the first points made by the Report about online publication relates to the difficulty in allocating liability for online publication to a particular party. In support of this statement, it cites only a 10-year-old decision of the Canadian Supreme Court. Again, more relevant case law was available.

[6] See Page 266 of the Report.

[7] In the third quarter of 2021 Google generated advertising revenue (primarily from its search engine and YouTube) of $53 billion, while Meta (primarily through Facebook and Instagram) recorded advertising revenues of $29 billion.

[8] It should be noted that the Australian Bill is aimed specifically at social media platforms, and will not apply to operators of other websites who generate most of their own content, and simply allow users to post comments as an additional feature of the website.

[9] In Parcel Connect v Twitter [2020] IEHC 279, the respondent platform’s position was that “it has nothing to say as to what the information should be and does not warrant that such information as it has will be sufficient to allow the plaintiffs to establish the true identity of the owner and operator of the account,” at para 20.

[10] See, inter alia, CSI Manufacturing Ltd v Dun and Bradstreet [2013] IEHC 547 and Ryanair Limited v Peter John Fleming [2016] IECA 265.

