The Online Safety and Media Regulation Bill 2022
17 January 2022
The Online Safety and Media Regulation Bill has been in gestation for several years. Its origins can be traced back to the Law Reform Commission’s Report on Harmful Communications and Digital Safety in 2016.[1] A final draft of the General Scheme of the Bill was published in December 2020, and a first draft of the Bill was published last week, on 12 January 2022.
The Bill proposes to amend the Broadcasting Act 2009 (referred to in the Bill as the ‘Principal Act’) rather than replace it. The Bill will create a Media Commission (to be known as Coimisiún na Meán) to regulate the activities of broadcasters and internet platforms based in Ireland. It will comprise a Chairperson and between three and six other members, each of whom will be referred to as Commissioners. It is intended to divide the Commission into three broad divisions – dealing with broadcasters, audio-visual material and online material.
The Media Commission will therefore:
(a) Take over the duties of the former Broadcasting Authority of Ireland. While it will be in charge of issuing broadcasting licences, it will also be in charge of regulating the content of Irish television and radio broadcasters and dealing with viewer and listener complaints;
(b) Transpose the revised Audio-visual Media Services Directive, meaning that video-sharing platforms such as YouTube will now be subject to increased regulation;
(c) Draw up an Online Safety Code which designated services hosting user-generated content will be required to abide by. This part of the Bill is intended to address the lack of a legislative framework in relation to harmful online content.
The focus of this article will be on the last of these, namely the measures that are proposed primarily to regulate the activities of the large social media platforms – Facebook, Instagram, YouTube, Twitter and TikTok – many of which have their EU headquarters in this jurisdiction. These are provided for in Part 11 of the Online Safety Bill, which inserts a new Part 8A into the Broadcasting Act 2009.
In so doing, this article will focus on the following specific areas:
1. The role of the Online Safety Commission;
2. The type of harmful content to be regulated;
3. The powers at the Commission’s disposal;
4. How the Bill compares to similar legislation elsewhere;
5. The main issues of contention.
The significant aspects of the draft Bill, and how they deal with the recommendations of the Joint Committee on Tourism, Culture, Arts, Sport and Media which were published in November 2021, are considered below.
1. The role of the Online Safety Commission
The division of the Media Commission that is in charge of the activities of online platforms – the Online Safety Commission – will designate which online service providers are to be subject to monitoring by the Commission, determine the nature of their obligations under the Online Safety Code, and implement the available sanctions should the provisions of the Code be breached.
a) Identification of “designated online services”:
The Commission will designate which online service providers are to be subject to its regulatory and enforcement powers. This may include any online services that host user-generated content, to include social networking services, discussion forums, search engines, personal messaging services and news websites which allow for the posting of user comments.
b) Online Safety Code:
The Commission will draw up an Online Safety Code to govern the standards and practices to be observed by designated online services. The Code will be designed to ensure that the services take measures to minimise the availability of harmful content on their platforms, and will provide guidelines on the handling of user complaints and issues. The Commission will monitor the manner in which user complaints are handled by the online services and, as well as specifying timelines within which complaints must be dealt with, may also have the power to demand that individual pieces of content be removed.
It should be noted that, in relation to private communications services and private online storage services, the Commission’s code-making powers will be limited to matters relating to content which it is a criminal offence to disseminate, so the Commission cannot regulate merely “harmful” material that is contained on, for example, personal messaging services such as WhatsApp.
c) Individual complaints will not be acted upon:
It is important to stress that, as the Bill currently stands, it is a “systemic” piece of legislation. It aims to monitor the operation of online platforms, rather than provide an avenue for individuals to pursue claims against those platforms. The fact that the title of Online Safety Commissioner echoes that of the Data Protection Commissioner can lead to a belief that their roles are analogous. There is, however, a significant difference between them.
The Data Protection Commission allows members of the public to make individual complaints to its offices concerning breaches of their data protection rights; those complaints are investigated and become the subject of a written decision. The Online Safety Commission, as it is currently envisaged, will provide no such mechanism.
Even if a social media platform breaches a provision of the Online Safety Code – by, for example, failing to remove a piece of content within the prescribed time – and such breach affects the rights of a member of the public, the Commission will not investigate individual breaches brought to their attention by the public. Instead, the data pertaining to individual breaches will be used generally to assess the manner in which the social media platforms are complying with their obligations under the code of practice drawn up by the Commissioner.
This is clearly a policy decision based on the sheer volume of complaints that the Commission would otherwise have to deal with, and the fact that it could not be adequately staffed to do so. It is a decision, however, as discussed below, that is subject to review.
2. What is harmful content?
Under section 44 of the Bill (which inserts the definition into section 139 of the 2009 Act), “harmful online content” includes two main categories (the list is non-exhaustive) –
(a) Material that it is a criminal offence to disseminate under existing legislation listed in Schedule 3 of the Bill. This includes:
• Child sexual abuse material;
• Content containing or comprising incitement to violence or hatred;
• Public provocation to commit a terrorist offence;
• The online publication of material which identifies a party to proceedings under the Criminal Law (Rape) Act 1981 or the Children Act 2001;
• Online threats to kill, or online harassment, contrary to the 1997 Act;[2]
• Offences under sections 2, 3 or 4 of the 2020 Act[3] (see discussion on this Act here).
(b) Online content that is likely to have the effect of bullying or humiliating a person, to encourage or promote eating disorders, or to encourage or promote self-harm or suicide, so long as such online content gives rise either to a risk to a person’s life or to significant harm to their physical or mental health.
3. The Powers of the Commissioner
The Online Safety Commissioner will have the power to audit designated online services on a regular basis, under a mechanism to be known as an “Information Notice”, to verify the degree to which they are conforming with their obligations under the Online Safety Code. The fact that members of the public cannot make individual complaints would appear to cause an issue, but they can bring matters to the attention of the Commission, which will feed into the reports generated on the online services’ compliance with the Code. The Commissioner can also receive complaints of systemic issues from “nominated bodies”, such as NGOs and charities which work in relevant areas such as suicide prevention and the protection of minorities.
The Commissioner has the power to:
1. Conduct investigations and inquiries,
2. Issue notices and warnings,
3. Impose administrative financial sanctions, subject to Circuit Court confirmation, and enter into settlement arrangements,
4. Prosecute summary offences.
In the event of non-compliance with the Online Safety Code by a regulated entity, the Commissioner can:
(a) Issue an administrative financial sanction in accordance with the procedure set out in Part 12 of the Bill. This sanction can be up to €20m, or 10% of the entity’s turnover from the previous financial year, and will need to be confirmed by the Circuit Court;
(b) Seek leave of the High Court to compel internet access providers to block access to a designated online service in the State;
(c) Require a designated online service to either remove or disable access to harmful content.
4. Similar legislation in other jurisdictions
Ireland is but one of several jurisdictions considering similar legislation. Some, indeed, such as Australia, have already introduced analogous legislation.
EU: Currently working its way through the European Parliament is the Digital Services Act, the long-awaited amendment to the E-Commerce Directive. Now 22 years old, the Directive remains the primary piece of EU legislation which governs the liability of internet intermediaries, be they “mere conduits” (primarily internet connection providers), “caching” services (primarily search engines), or “hosts” (platforms which host user-generated content, such as social media services). The Digital Services Act, whose first draft was published in December 2020, retains this classification but creates a new category of “online platform”: essentially a sub-category of “hosts” whose primary purpose is to store and disseminate to the public information provided by its users.
The Act proposes to impose more stringent requirements in respect of notice and takedown procedures for hosting services, including a requirement under Article 15 to provide a written explanation of a platform’s reasons for disabling access to or removing a piece of content, and a mechanism under Article 17 for users to challenge a decision a platform makes in respect of removing, or refusing to remove, a particular piece of content, or disabling the user’s account. For the first time, the Act explicitly provides a “good Samaritan” exception for internet intermediaries, allowing them to monitor content being uploaded to their platforms without losing any of the protections provided for by Articles 3, 4 and 5 (the equivalent of Articles 12, 13 and 14 of the E-Commerce Directive). The Act also proposes to place enhanced responsibilities on “very large online platforms” (platforms which have over 45m regular users) in respect of the manner in which they deal with illegal and harmful material, with an obligation to conduct annual assessments of the risks arising from the use of their services.
It is envisaged that the Digital Services Act will not come into effect before 2023, but any provisions contained in the Online Safety and Media Regulation Bill will obviously need to be consistent with the Digital Services Act, which will be directly applicable in this jurisdiction.
United Kingdom: It is worthy of note that the UK is currently working on a similar piece of legislation. The draft Online Safety Bill, published in May 2021, also aims to regulate the manner in which online platforms deal with harmful user-generated content. The Bill establishes a “duty of care” for designated online providers to take reasonable steps to prevent, reduce or mitigate harm occurring on their service. Two categories of online service providers are provided for:
Category 1: A “user to user” service – i.e. a service which hosts user-generated content, to include the giant internet platforms such as Facebook and Twitter, but specifically excluding news publishers. These companies will be supervised both in relation to how they deal with illegal content, such as child exploitation and terrorism content (similar to the Irish Online Safety Bill), and in respect of content that, while possibly legal, is harmful, such as content relating to self-harm, suicide and eating disorders. It is envisaged that disinformation may also fall within this category of harmful material.
Category 2: Companies that provide search engine services, which will only be supervised in respect of illegal material.
Harmful material is defined as being content in respect of which “there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on a (child or adult) of ordinary sensibilities.”[4] The Bill will also not deal with material which is defamatory, breaches data protection rights, or causes financial harm. The dissemination of news by recognised organisations and journalists is given special protection, with Category 1 providers being obliged to have systems in place which protect the freedom of expression of “recognised news publishers” and journalists, the latter to include “citizen journalists.” Similarly to the Online Safety Bill in this jurisdiction, the regulator (Ofcom) will have the power to impose fines of up to £18m or 10% of annual global turnover on a platform.
Australia: Australia has already enacted analogous legislation – the Online Safety Act 2021. Under the Act, social media services are expected to follow “Basic online safety expectations”, similar to the proposed Online Safety Code in this jurisdiction. The type of material regulated under these expectations includes cyber abuse and bullying material aimed at children and adults, the non-consensual sharing of intimate images, age-inappropriate material (ensuring that children – persons under the age of 18 – do not have access to material only suitable for adults), and material that includes “abhorrent violent conduct” (defined as terrorism, murder, rape, torture or kidnapping).
In respect of specific timescales under the 2021 Act, if a social media service or user has previously been asked to take down cyber-bullying material aimed at a child or adult, or intimate images that have been shared without consent, and has failed to do so within 48 hours, the eSafety Commissioner may order that the material be removed from the platform, by the user or by the social media service, within 24 hours.
5. The main issues of contention
Systemic complaints handling system:
The Joint Committee on Tourism, Culture, Arts, Sport and Media published its report into the Scheme of the Online Safety and Media Regulation Bill in November 2021, and this issue appeared to be a major source of contention. Many parties who were consulted by the Committee submitted that the absence of any power for the Commissioner to deal with complaints from individual users was a significant weakness in the Bill.
Unsurprisingly, both Facebook and Twitter opposed the provision of an individual complaints system, suggesting that it would be over-burdensome on their platforms given the volume of complaints they might have to deal with. Organisations such as the Data Protection Commission, the Ombudsman for Children’s Office and the Rape Crisis Network, however, lobbied for the inclusion of an individual complaints mechanism, which would take the form of a “safety-net” where platforms have failed to deal satisfactorily with an individual complaint at first instance. Those in favour relied on the fact that the eSafety Commissioner in Australia is tasked with dealing with individual complaints, pointing out that her office received approximately 350 complaints per week from members of the public in 2019, despite Australia having a population five times the size of Ireland. The Committee’s report concluded by recommending that an individual complaints mechanism should be included as part of the legislation.
Speaking on 12 January 2022, the Minister for Tourism, Culture, Arts and Media, Catherine Martin, adverted to the possibility that an individual complaints mechanism may yet form part of the final Bill. She pointed out, however, the difficulty of handling complaints in respect of online platforms that have their EU headquarters in Ireland, as this may involve dealing with complaints from a potential audience of 450m EU citizens.[5]
What constitutes harmful online content:
As the Bill currently stands, the Commission will not be responsible for material that is defamatory, that violates data protection or privacy law, that violates consumer protection law or that violates copyright law. It is considered that these areas of law are already adequately regulated by existing legislation, although the new s.139(b) does allow for further categories of online content to be added to the definition of what constitutes harmful content.
The Joint Committee recommended that the current, relatively narrow definition be extended to include the content described above, which is not specifically provided for in the draft Bill. It further suggested expanding the definition to include misinformation and content which results in financial harm. The comments of the Minister for Tourism, Culture, Arts and Media on 12 January, however, did not suggest that this issue was under consideration.[6]
Harm against children:
The Joint Committee recommended that the Bill be amended so as to indicate a minimum age for a child to be permitted to create an account with designated online services.
Interestingly, the UK’s Online Safety Bill proposes to put a heightened obligation on platforms (both Category 1 and 2) which are likely to be used by children, which suggests that the large online platforms will be required to have age verification measures in place in relation to certain types of material. This can be contrasted with the Online Safety Bill in this jurisdiction, which it appears will not make such a requirement of online platforms, and may instead leave such measures under the auspices of the Data Protection Commission.
Anonymity:
There is no reference in the Bill to any provisions dealing with online anonymity. This has become an increasingly contentious issue, and has recently been the subject of draft legislation in Australia. In circumstances where anonymity is provided for entirely at the discretion of platforms which host user-generated content, it is perhaps surprising that there appears to be no legislative appetite for imposing any regulations on such platforms, particularly the social media giants, in respect of the manner in which they allow users to both sign up, and operate, without revealing their true identity. For a more in-depth discussion of this particular issue, see here.
6. What next?
The Government has pledged to set up an expert group to report back within 90 days with recommendations as to how best to address the matter of an individual complaints mechanism. It will, in the meantime, begin staffing the new Commission, including the appointment of an Online Safety Commissioner. The Minister has stated that she hopes to put the Bill before the Oireachtas for enactment prior to this year’s summer recess.
[1] At para 3.66, the LRC recommended that “As matters currently stand, while it would appear that the non-statutory self-regulation by social media companies, through their content and conduct policies, has improved in recent years, this may not be sufficient to address harmful communications effectively … The Commission … therefore recommends that an office be established on statutory basis with dual roles in promoting digital and online safety and overseeing an efficient and effective take down procedure in relation to harmful digital communications.”
[2] The Non-Fatal Offences Against the Person Act 1997.
[3] The Harassment, Harmful Communications and Related Offences Act 2020.
[4] Draft Online Safety Bill, Sections 45 and 46.
[5] https://www.rte.ie/news/politics/2022/0112/1273149-online-safety/
[6] https://www.rte.ie/news/politics/2022/0112/1273149-online-safety/