15 June 2022

Legislation UPDATE: Online Safety Bill - What does this mean for businesses?


just what is the Online Safety Bill and what does it mean for businesses?



In the second of our series of articles on the Online Safety Bill, we set out what you need to know if your company is potentially going to be caught by its remit.

what stage is the Online Safety Bill at? 

On 17 March 2022, the UK government presented a new iteration of the Online Safety Bill to the House of Commons. This draft has been several years in the making, following the Online Harms White Paper in 2019 and a first draft of the Online Safety Bill in May 2021.

The Bill will now proceed through the House of Commons and the House of Lords under the ordinary legislative procedure, with various debates and amendments. Ministers are hoping that the Bill will receive royal assent (i.e., become law) by the end of the year. There will then be a transition period for companies and the regulator to prepare for the legislation to be enforced.

who will it cover? 

The Bill will impose duties on what will be called “regulated services”: in reality, user-to-user services that share user-generated content (such as Facebook and YouTube) and search services (such as Google), in each case with links to the UK. It will cover three types of content: (i) illegal content; (ii) content that is harmful to children; and (iii) content that is legal but harmful to adults.

In addition, it is likely that the regulators will very quickly turn their attention to any service which provides content that is (or is likely to be) accessed by children – looking first to those with the largest audiences and a range of already-known high-risk features (these are known as Category 1 services).

what’s new? 

A number of changes to the last version of the Bill were trailed in the month before publication, as the government looked to gain maximum publicity. These included:

  • Requiring Category 1 companies to offer user verification and ways for users to control who can interact with them;
  • Ensuring Category 1 companies make tools available for adult users to choose if they want to be exposed to ‘legal but harmful’ content;
  • Announcing extra priority illegal offences on the face of the Bill, including revenge porn, hate crime, fraud, the sale of illegal drugs or weapons, promotion/facilitation of suicide, people smuggling and sexual exploitation;
  • Bringing paid-for scam adverts into scope to combat online fraud;
  • Criminalising cyberflashing, as per a proposal from the Law Commission; and
  • Requiring all sites that publish or host pornography to check that users are 18 years or older.

The new and most noteworthy developments in this iteration include: 

  • Clarity on what ‘legal but harmful’ content companies are expected to tackle, as this will be set by government and approved by parliament, rather than being identified as such by companies themselves;
  • A false communications offence, meaning that those who knowingly convey information online that is false can be fined or imprisoned;
  • Executives of companies who fail to cooperate with Ofcom’s information requests could face prosecution or jail time within two months of the Bill becoming law, rather than two years as previously drafted;
  • Companies’ senior managers would become criminally liable for destroying evidence, failing to attend or providing false information in interviews with Ofcom, and obstructing the regulator when in company offices; and
  • Through regulations, the DCMS Secretary of State will be granted the power to alter the future scope of the Bill, bringing elements such as one-to-one live aural communications within its jurisdiction.

further analysis. 

The key change to the Bill is a clarification of platforms’ obligations with regard to ‘legal but harmful’ content. The new draft removes the requirement for Category 1 services to address harmful content accessed by adults beyond those priority harms designated by parliament, which will be set out in secondary legislation. This provides clarity about the harms that services must address, and is explicitly intended to avoid platforms taking an overly broad approach. It also indirectly responds to industry concerns that the obligation to assess what content on a platform might be harmful was insufficiently clear, requiring a series of subjective tests by moderators about what might harm an individual of a particular sensibility.

In other areas, however, it is clear that Dorries has sought to increase the regulatory burden on the largest platforms rather than provide legal clarity. Specifically, the new obligations around user verification, and the tools allowing users to tailor their exposure to legal but harmful content, are likely to be onerous. And while the precise detail will be worked through by Ofcom, these measures go significantly further than the ‘proportionate’, ‘risk-based’ approach to regulation that was originally proposed. In addition, Dorries has chosen a maximalist approach to sanctioning platforms that fail to meet their obligations under the Bill; the offence for senior managers who do not meet their obligations will come into force as soon as the Bill is passed, rather than after a period of two years as previously envisaged.

This likely reflects her desire to position herself as tougher on tech companies than her predecessor, rather than a considered approach to timelines for implementation. It also confirms her appetite for early action once the Bill is passed. 

key updates in this version of the Bill:

Duties to empower adult users 

The Bill sets out duties to empower adult users on Category 1 services, to allow them to increase their control over harmful content in one of two ways: 

  • Reducing the likelihood of encountering priority content that is harmful; or
  • Alerting the user to the harmful nature of the priority content.

A separate duty would allow adult users to filter out non-verified users – both in terms of being exposed to their content, and in terms of allowing non-verified users to access theirs.

User identity verification

As per the above, a provider of a Category 1 service must offer all adult users the option to verify their identity.  The verification process may be of any kind, “and in particular, it need not require documentation to be provided”.  Ofcom will produce guidance on this, and it will reflect the need to ensure availability to vulnerable adult users. In producing the guidance, Ofcom must consult the Information Commissioner, those with technological expertise, and those who represent the interests of vulnerable adult users.  

Children’s access assessments and age assurance 

The Bill makes numerous references to ‘age verification, or another means of age assurance’, which the previous draft did not do explicitly. Children’s access assessments must be carried out no more than one year apart.

A service will be deemed ‘likely to be accessed by children’ if it is possible for children to access it (as per the previous draft), but also a) where the provider of the service fails to carry out the first assessment as required, and b) where Ofcom determines as much after an investigation.

Measures to be used to comply 

For both adults (in relation to illegal content) and children (in relation to harmful content), the Bill requires providers to use the following measures, amongst others, ‘if it is appropriate to do so’: design of functionalities and algorithms, policies on user access to the service, user support measures, and staff policies and practices.

Duties around illegal content 

The previous draft required companies to ‘minimise’ the presence of priority illegal content, whereas the new draft requires companies to prevent individuals from encountering such content. 

Proactive technologies 

A duty to include provisions in the terms of service giving information about any proactive technology used, including the type of technology and how it works. There is also a duty to facilitate complaints by a user whose content is taken down or given a lower priority due to the use of proactive technology, or complaints by those who feel proactive technology has been used inappropriately.

Fraudulent advertising 

A new duty for Category 1 services to prevent fraudulent advertising, minimise the length of time any such content is present, and swiftly take it down when alerted to its presence. In determining what is proportionate here, the provider should consider a) the nature and severity of potential harm to users and b) the degree of control the provider has over the placement of ads.

In the case of Category 1 services that also offer a search service, these duties do not extend to ads encountered via the search service or anything relating to the design, operation or use of the search engine. Category 2 services must minimise the risk of users encountering such content.

Fraud offences are defined as per the Financial Services and Markets Act 2000 and the Fraud Act 2006.  Ofcom must issue a code of practice for the purpose of compliance with these duties on fraudulent advertising. 

Harmful communications 

A new harmful communications offence that would fine or imprison those who deliberately cause serious distress to a ‘likely audience’ via an electronic message. News publishers, broadcasters and providers of on-demand programmes, however, will remain exempt from this offence.

False communications offence 

Those who knowingly convey information online that is false can be fined or imprisoned. News publishers, broadcasters and providers of on-demand programmes, however, will remain exempt from this offence.

Threatening communications offence 

A further offence, also punishable by imprisonment or a fine, for individuals who convey a threat of death or serious harm, intending that an individual encountering the message would fear the threat being carried out, or where the victim did in fact fear it would be carried out.

Reporting CSEA to the NCA 

Providers of user-to-user services and search services will be required to report all detected and unreported child sexual exploitation and abuse (CSEA) content to the National Crime Agency (NCA). The Secretary of State will make regulations in connection with these reports, outlining the information to be included, the format, the time frames for sending the reports, the records that providers must keep, and anything else appropriate. In devising these regulations, the Secretary of State must consult the NCA and Ofcom.

A person who provides false information in this area is liable to imprisonment. 

Cyber-flashing  

A new offence for those who send an image or film of any person’s genitals with the intention that the recipient experience alarm, distress or humiliation. 

Pornographic content

Providers of pornographic content have been brought within the scope of the Bill and have a duty to ensure that children are not normally able to encounter such content, “for example, by using age verification”.

Ofcom must produce guidance for such providers to assist them in complying with their duties. 

Fees to Ofcom 

Regulated services must notify Ofcom in relation to each charging year, providing details of the provider’s qualifying worldwide revenue for the qualifying period. The fee owed to Ofcom will be determined with reference to that worldwide revenue over the charging year and “any other factors that Ofcom consider appropriate”. Ofcom must produce a statement on what constitutes ‘worldwide revenue’.

Ofcom may decide that certain exemptions are appropriate; these would be approved by the Secretary of State and apply from the beginning of the charging year.

Secretary of State’s power to repeal exemptions for content or services

The Secretary of State will have the power, by regulations, to repeal the exemption that currently exists within the Bill for one-to-one live aural communications, if they deem it appropriate to do so because of the risk of harm to individuals. They will have the same power to repeal the exemption for comments and reviews on provider content, if deemed appropriate because of the risk of harm to individuals.

Conversely, they will also have the power to exempt further descriptions of user-to-user services or search services, if the risk of harm to the public is deemed to be low.

If you want to speak to us about what this Bill might mean for you, get in touch. To see other changes in legislation that are on the horizon this year, download our regulatory tracker here.

If, however, this raises other issues that you have long thought about (like many of us have here at Lawbox) in terms of your children’s safety online and whether you’re doing enough, we’re happy to share some of the practical tips that organisations like the NSPCC have shared…

 
