How to deal with online anonymity
17 January 2022
The Irish Government has undertaken to publish its review of the 2009 Defamation Act this year, and it would appear that the long overdue reform of the Act is drawing closer. With this in mind, this article will consider a particularly topical issue, that of defamatory material posted online by anonymous internet users.
It is an issue which has come under increased focus following comments about online anonymity in October 2021 by Australian Prime Minister Scott Morrison. In early December, draft legislation that aims to tackle the issue – the Social Media (Anti-Trolling) Bill 2021 – was published by the Australian government.
This article will deal with the problems caused by defamatory content posted by anonymous users, the ambit of the draft legislation in Australia, and the manner in which UK and Irish defamation laws already deal with the issue. It will also consider the potential liability of internet intermediaries – companies such as internet access providers, search engines, and websites that host content generated by users and enable the person who creates that content to communicate with the users who receive it – with a particular emphasis on social media platforms.
The issue of online anonymity
Online anonymity is one of the most contentious issues of the internet age. The ability of internet users to post material anonymously has long been considered one of the great advantages of online usage, allowing as it does opinions to be expressed freely without fear of reprisal. This is particularly true of political speech, where the availability of anonymity has encouraged dissident voices to speak out against repressive regimes.[1] It applies likewise to more mundane material, with users being able to speak freely on a range of issues and express opinions that may be unpopular with their family, friends or employers.
The downside of anonymous internet use has, however, come into sharper focus in recent years. Anonymous trolling is now an everyday occurrence, with many users posting content that insults, harasses and defames others in the belief that their anonymity prevents them from being held accountable.
It should be remembered that there is no fundamental legal right to anonymity. Internet users who sign up to social media platforms without giving verified identification, and who post material to such platforms with their true identity disguised, do so entirely at the discretion of the platform itself. Put simply, users can post material anonymously on Facebook, Instagram, Twitter, YouTube and TikTok because those social media platforms allow them to do so.
It is clear that this encourages some users to post unlawful material in the belief that their anonymity will provide them with a shield from liability. This situation is exacerbated by the fact that, when requested to reveal the identity of the author of unlawful material posted anonymously on their platform, social media services routinely decline to do so, citing issues pertaining to data protection and privacy. This necessitates the victim obtaining a court order via the Norwich Pharmacal procedure[2] to compel the platform to release such material as they possess in respect of the anonymous user. The numerous difficulties with this particular procedure are considered here.
The defences afforded to internet intermediaries
When the author of a defamatory statement cannot be easily identified, the victim may instead consider attempting to impose liability on the internet platform that is hosting the material. The attractions of seeking to hold this internet intermediary liable are self-evident – they are clearly identifiable, they are within the jurisdiction of the Irish courts, and they are certainly a mark for any award of damages.
In order to shield these intermediaries from a potentially vast number of legal claims, they have been provided with robust defences by both the E-Commerce Directive[3] and the defence of Innocent Publication under s.27 of the Defamation Act 2009.[4]
Under the E-Commerce Directive, intermediaries that host content generated by their users – the social media giants being the most obvious example – will not be held liable for such content unless
(a) they have been put on notice of its existence, and
(b) they have failed to act “expeditiously” in dealing with it.
Under s.27 of the 2009 Act, parties who are not the “author, editor or publisher” of the material complained of – for example, a social media platform – will be entitled to avail of the defence of Innocent Publication, and to avoid liability provided that they took reasonable care in relation to the publication and were not on notice of any potentially defamatory content within it.
This defence is mirrored in Australia by that of Innocent Dissemination under s.32 of the Defamation Act 2005, and in the UK by the defence afforded to internet intermediaries under s.1 of the Defamation Act 1996. The UK’s Defamation Act 2013 goes even further, with s.10 providing a jurisdictional bar against proceedings being instituted against a party that is not the “author, editor or publisher” of the statement complained of unless it is “not reasonably practicable for an action to be brought against the author, editor or publisher.”
The difficulties this creates for the victim of anonymous online defamation
The factors listed above create a situation which is clearly unsatisfactory from the complainant’s point of view. The combination of
a) the unwillingness of social media platforms to voluntarily disclose the identity of their users;
b) the expense involved in obtaining a Norwich Pharmacal Order compelling them to do so;
c) the fact that the information the platforms possess is often inadequate to reveal their users’ true identity; and
d) the range of defences available to the social media platforms
would often appear to leave the complainant without a defendant against whom to obtain a remedy.
The attempt being made in Australia to address this situation – with legislation that proposes to oblige social media platforms to co-operate more meaningfully with complainants, or risk losing the defences from liability which have traditionally been open to them – is now considered.
The Australian initiative
In November 2021, Prime Minister Scott Morrison articulated the frustration of many victims of anonymous online trolling when he declared that social media has become a “coward’s palace … where (anonymous) people can say the most foul and offensive things to people, and do so with impunity.” He suggested that companies that allow online users to operate anonymously should no longer be considered platforms, but rather primary publishers of such material.
This was viewed as a radical proposal, as the defence of Innocent Dissemination had long been considered to immunise platforms such as Facebook and Twitter from liability for any content posted by their users, so long as they dealt expeditiously with unlawful material once put on notice of its existence. Under this provision, social media platforms were essentially considered to be “secondary publishers”, who would avoid liability subject to criteria provided for by legislation. The most controversial aspect of the Australian PM’s proposal, however, was that no such defence would be available to these platforms in respect of material posted anonymously; the platforms would instead be treated as the primary publisher, as though they had authored the content themselves.
The draft Social Media (Anti-Trolling) Bill 2021, which followed almost immediately, is an attempt to codify some of the Prime Minister’s suggestions. The Bill proposes to make social media platforms liable for comments posted by their users, as well as comments posted by third parties on a user’s page, by considering the platforms to be the publishers of such comments. It further proposes to remove the defence of Innocent Dissemination which had previously been available to them.[5]
This apparently onerous provision is tempered by a new defence which is available to social media platforms under s.15 of the Bill. This defence requires the platform to have a “Complaints Scheme” in place, and to follow that Scheme upon receiving a complaint from an injured party. The main points of this scheme are as follows:
• Upon receipt of a notice from a complainant alleging that they have been defamed, the social media platform must contact the author of the comment within 72 hours to inform them of the complaint;
• If the person who posted the comment consents, the comment will be removed by the social media service;
• If the person who posted the comment does not consent to it being removed, the social media service must provide the complainant with the author’s name, address and email address, subject to the author consenting to such information being disclosed;
• If the author does not consent to their name and address being disclosed, the complainant can apply to the court for an “End-user information disclosure order”, obliging the social media service to give the complainant contact details for the author of the defamatory comment. (It is unclear how this disclosure order differs from similar relief already available to complainants, which appears to mirror the Norwich Pharmacal procedure utilised in this jurisdiction.)
It has long been an issue that users of social media platforms can set up an account with just an email address, which may or may not identify them. The result is that even where a complainant compels the platform to identify its user by obtaining a Norwich Pharmacal Order, the complainant is often not provided with the identity of the author of the material, and so cannot institute proceedings against them. A central pillar of the proposed Complaints Scheme, however, is that in order for social media platforms to avoid liability, they must have up-to-date contact information for all their users – to include the correct name, address and email address – regardless of whether they voluntarily disclose this information to the complainant, or whether they do so pursuant to a court order.
For this reason, the Bill may be seen not necessarily as a direct attempt to fix social media platforms with liability for any defamatory material which they host, but rather as an incentive for them to both follow a swift and effective complaints mechanism, and allow for the easier identification of authors against whom complainants can institute proceedings directly. If the social media platform does not have contact information for the author of the defamatory comment, then it would appear that they will have no defence if a claim is made directly against them. Clearly, therefore, the Bill creates an incentive for the platforms to possess such information, which should result in a situation whereby a complainant always has an identifiable defendant against whom proceedings can be instituted. That defendant will be either the original author or, if they cannot be identified by the social media platform, then the platform itself.
The Anti-Trolling Bill, as drafted, is not without its limitations. Firstly, the name of the Bill itself is misleading, in that it confines itself solely to dealing with defamatory material. The term “trolling”, on the other hand, has come to be used as a catch-all term for various types of online abuse, to include harassment, revenge porn and hate speech.
Secondly, the Bill further restricts itself to the potential liability of “social media services”, described as electronic services where the “sole or primary purpose of the service is to enable online social interaction between two or more users.” This definition therefore excludes, for example, news organisations which post their own content but also allow users to comment at the end of articles, and appears instead to be aimed directly at social media platforms such as Facebook and Twitter.
Thirdly, and perhaps most significantly, as part of the Complaints Scheme the social media platform is not obliged to disclose contact information in respect of the author of the defamatory statement unless the author gives their consent. It appears highly unlikely that the majority of anonymous users, accused of defaming a third party, will consent to their identity being revealed so that proceedings may be instituted against them. This means that, in reality, the complainant will more often than not be required to apply to the court to obtain such information. In the absence of clear guidance as to how the costs of such an application will be awarded, many less well-resourced complainants may be discouraged from bringing such an application.
UK Defamation Act 2013
Section 5 of the UK Act already provides specifically for defamation actions brought against “operators of websites”, which would include the main social media platforms, and broadly echoes the proposed Australian legislation. It provides a defence for such platforms where the complainant was not able to identify the person who posted the offensive comment, and therefore seeks to make the platform liable instead. In those circumstances, the platform will be able to avail of the defence afforded by s.5 if it complies with regulations mandating the manner in which it deals with a complaint, which are broadly similar to those proposed in the Australian draft legislation.
In the UK, the 48-hour period which the platform has to deal with the complaint is even more truncated than the 72-hour Australian equivalent. The most significant difference between the defence provided for by s.5 and that proposed by the Australian Anti-Trolling Bill, however, is that the s.5 defence does not replace the alternative defences open to the platform. This means that it can still rely on the s.1 defence under the Defamation Act 1996, which is similar to the Innocent Publication defence in this jurisdiction. The reality, therefore, appears to be that the s.5 defence is seldom used, in circumstances where the less onerous defences under sections 1 or 10 are also available to the internet intermediary.[6]
Defamation Act 2009
There is no provision in domestic legislation equivalent to s.5 of the UK’s 2013 Act, or the proposed Anti-Trolling legislation in Australia. Instead, social media platforms operate their own, self-generated notice and takedown procedures without legislative oversight,[7] and the speed with which they respond to complaints about defamatory material is dictated by the platforms themselves.
If such material is posted by anonymous users, the platforms routinely decline to divulge any information they hold concerning the identity of those users, and complainants are consequently obliged to seek a Norwich Pharmacal Order to obtain it. Furthermore, as platforms have no incentive to possess a verified name and address for the anonymous authors of defamatory material, such an Order frequently turns out to be of little evidential value.
In this regard, at least, Irish legislation appears to be lagging behind its UK and Australian counterparts. With the growing problem of unlawful material being posted by anonymous users of social media, the review of the 2009 Act provides an opportunity to address this issue.
Conclusion
The proposed Online Safety and Media Regulation Bill (discussed here) would compel social media platforms to have robust complaints procedures in place, and to act expeditiously to take down unlawful material. The draft Bill, however, appears to specifically exclude defamatory material, dealing instead only with material that constitutes harassment, hate speech and other types of harmful content. Nor do there appear to be any specific provisions relating to online anonymity in the Bill as drafted.
For this reason, and in the absence of a more accessible Norwich Pharmacal-type application, a solution would be to amend the Defamation Act 2009 so as to make it easier for victims of defamation to obtain the identity of would-be defendants from the social media platforms which allow them to post content anonymously. Mandating that platforms hold verified information as to their users’ identity would appear to be a reasonable first step, along with incentivising the platforms to either provide the complainant with this information, or take down the offending material within a matter of days.
The continued availability of the s.1 defence in the UK means that the importance of adhering to the complaints mechanism provided for by s.5 of the 2013 Act appears to have been reduced. With this in mind, it is perhaps more sensible to follow the lead of the Australian legislature, and make this defence unavailable to social media platforms when the defamatory material in question has been published anonymously. This would, at the very least, compel social media platforms to ensure that they have sufficient information about their users so that, upon receipt of a Norwich Pharmacal Order, the complainant will be provided with an identifiable defendant against whom to institute proceedings.
Without simultaneous action as regards the Norwich Pharmacal procedure, however, such moves would appear to leave a complainant in no better a position against an anonymous author of defamatory content than is currently the case. The likelihood that the author will not consent to their name and address being provided to the complainant means that this procedure will almost certainly need to be followed. If a social media platform were obliged to indemnify the complainant against the costs of bringing a successful Norwich Pharmacal application, this would clearly mitigate much of the difficulty of having to seek such an Order.
Alternatively, if the requirement for the author’s consent to have their identity revealed were removed, this would allow proceedings to be instituted directly against the author, effectively removing the social media platform from any proceedings. The issue here is the degree to which the complainant would have to satisfy the platform that the content complained of was defamatory before it was required to disclose the identity of a user without the latter’s consent. The prospect of platforms themselves having to adjudicate on the likelihood that a statement is defamatory is clearly problematic; erring on the side of caution, their decisions may have an unwanted chilling effect on the right to freedom of expression. Equally, the idea of a court making a preliminary decision on this issue, which could then be presented to the platform, is fraught with procedural difficulties.
So long as social media platforms allow users to operate anonymously, however, the issues considered above will remain.
Ends.
[1] Perhaps the most famous comment on the subject was by the US Supreme Court in McIntyre v Ohio Election Commission 514 US 334 (1995) which held that “Anonymity is a shield from the tyranny of the majority … It thus exemplifies the purpose behind the Bill of Rights, and of the First Amendment in particular to protect unpopular individuals from retaliation — and their ideas from suppression — at the hand of an intolerant society…” The Grand Chamber of the European Court of Human Rights held likewise in Delfi v Estonia (App. No. 64569/09) that “Anonymity has long been a means of avoiding reprisals or unwanted attention. As such, it is capable of promoting the free flow of ideas and information in an important manner, including, notably, on the Internet…”
[2] As formulated by the English case of Norwich Pharmacal v Customs and Excise Commissioners [1974] AC 133.
[3] Articles 12-15 of Directive 2000/31/EC.
[4] The section 27 defence of Innocent Publication.
[5] Under section 32 of the Defamation Act 2005 and section 235 of the Online Safety Act 2021.
[6] https://www.brettwilson.co.uk/blog/defamation-act-2013-a-summary-and-overview-six-years-on/
[7] This is proposed to be amended by the provisions of the Online Safety and Media Regulation Bill, discussed here.