On June 23, 2021, the Australian Parliament passed the Online Safety Bill 2021 (Cth). The bill was introduced on February 24, 2021, to address the issue of cyberabuse and cyberbullying against Australian adults and to establish an enforcement mechanism through the eSafety Commissioner. The Parliament also passed a complementary bill, the Online Safety (Transitional Provisions and Consequential Amendments) Bill 2021 (Cth). With its passage, the Online Safety Bill 2021 became the Online Safety Act 2021 (Cth), which will come fully into force on January 23, 2022.
The bill establishes a regulatory framework that builds on the existing Enhancing Online Safety Act 2015 (Cth), which addressed cyberbullying against minors only, not adults, and additionally contains provisions on the nonconsensual sharing of intimate images, among other matters.
The purpose of the bill is to strengthen the existing legislation by providing the eSafety Commissioner with new powers to address online harms. The bill establishes a “cyberabuse take-down scheme for Australian adults,” which will operate via a complaints mechanism to the eSafety Commissioner and hold service providers accountable for failing to take down content. Notably, as stated in the explanatory memorandum for the bill, the bill seeks to provide a “more flexible framework” to “accommodate new online harms as they emerge.”
The bill also brings ancillary service providers, such as search engines and app stores, into the regulatory framework; creates a positive obligation on technology firms and digital platforms to implement safety standards; provides the eSafety Commissioner with the power to direct internet service providers to block terrorist or extreme violent material; and establishes civil penalties for breaches of relevant provisions.
The underlying policy objective of the bill is to address harm experienced by Australians online, with the explanatory memorandum excerpting a number of reports on harm experienced by minority groups in particular. The explanatory memorandum also noted the increase in internet usage to maintain social and economic connections during the COVID-19 pandemic.
A report by the Australian Communications and Media Authority (ACMA) in 2019 found that 90% of Australians had access to the internet, with 63% of Australian adults using social networking to communicate in the previous six months (these figures increased to 99% and 72%, respectively, in ACMA’s 2020 report). According to the Australia Institute, 39% of adult Australian users experienced online harassment and reputational damage. Amnesty International found that three in ten women experienced online abuse, and a report by the eSafety Commissioner found that Aboriginal and Torres Strait Islander people, as well as those who identify as LGBTQI, “are more than twice as likely to experience online hate speech.”
The bill also seeks to address terrorist and extreme violent material online. The explanatory memorandum refers to the Christchurch terrorist attacks and emphasizes the gap in the regulatory framework at that time. While internet service providers complied with the government’s request to block footage of the attacks, the previous regulatory framework did not provide the government with the authority to order such actions.
Key Changes in the Bill
- Provides the eSafety Commissioner with the power to issue link deletion and app removal notices to service providers and app providers, respectively. Once a notice is issued, material must be removed within 24 hours unless a longer period is specified. Failure to comply with a notice will result in a civil penalty: a fine of up to AU$550,000 (about US$404,430) for a platform and up to AU$111,000 (about US$81,620) for an individual.
- Provides the eSafety Commissioner with the power to issue a summons or notice to produce documents in order to investigate claims. (Bill cl 229.)
- Brings providers of app distribution services and internet search engine services into the existing Online Content Scheme, which enables the eSafety Commissioner to take action against seriously harmful online content.
- Requires internet service providers to disable access to material that promotes or incites “abhorrent violent conduct” (Bill cl 9), such as the 2019 terrorist attacks in Christchurch.
Reactions to the Bill
Online safety advocates in Australia have supported the bill as a deterrent to online abuse, with Australian authorities arguing that the bill is necessary to address abuse directed at women.
The Australian Greens voted against the bill on the basis that it excessively expanded the powers of the eSafety Commissioner and may result in the complaints process being abused by “people opposed to sex work, pornography and sexual health for LGBTQI+ people” such that they “seek to have lawful online adult content removed.”
The Australian government, in the explanatory memorandum, acknowledged the complexity of regulation in the online environment and the need to balance freedom of speech and freedom of information with “the responsibility to protect vulnerable Australians from harm.”
According to Facebook’s submission on the bill, Facebook broadly supports the bill, noting that “industry, government and the community all have a role to play in working towards online safety,” while highlighting concerns about government intervention in private messaging by adults online. The platform stated that it would continue to develop policy and technology, including through the use of AI, to remove harmful content online.

Prepared by Nabila Buhary, Law Library intern, under the supervision of Kelly Buchanan, Chief, Foreign, Comparative, and International Law Division II