Online Harms – the end of self-regulation for tech giants?
The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) have published their long-awaited Online Harms White Paper, which proposes regulation to tackle ‘online harms’ – you can read the full White Paper here. At present, online companies such as social media platforms are not liable for potentially illegal content posted by their users unless they have knowledge of its existence and fail to remove it in good time. However, the Government now plans to introduce new laws compelling internet companies (e.g. social media and tech companies) to take greater accountability for ‘harmful’ content produced and published on, shared by or accessible via their platforms, and to be more proactive in removing illegal content.
The statutory duty of care
The Government has proposed the introduction of a new statutory duty of care that would require relevant companies to “take reasonable steps to keep users safe, and prevent other persons coming to harm as a direct consequence of activity on their services.” The ‘online harms’ covered by the duty of care are broad, ranging from child sexual exploitation and abuse, terrorist content, anonymous online abuse and cyberbullying to online disinformation, amongst others.
Who will the new regulation apply to?
‘Relevant companies’ are those that facilitate “hosting, sharing and discovery of user-generated content” or “public and private online interaction between service users”. This means that a wide range of companies will be caught by the new statutory duty of care, whether that be global social media platforms (e.g. Facebook or Instagram), messaging services (e.g. Snapchat or WhatsApp), search engines (e.g. Google), cloud storage services (e.g. Dropbox) or public message boards (e.g. Reddit or other internet forums).
What will the regulatory framework be?
An independent regulatory body will be appointed to enforce the statutory duty of care and will publish codes of practice in due course for each type of online harm. The regulator will have broad enforcement powers including the ability to issue substantial fines of up to 4% of global turnover and publish public notices about non-compliant companies.
The Government is also consulting on additional enforcement powers that would be available to the regulator. These include the ability to disrupt business activities, to require ISPs to take repeatedly non-compliant websites and apps offline in the UK, and to impose personal liability on members of senior management.
What are the ramifications for internet companies?
Under the proposals, the relationship between the regulator and the companies that fall within the statutory duty of care will be governed by proportionality. Companies must take action that is proportionate to the severity and scale of the harm in question. Likewise, the regulator will assess companies’ actions in proportion to their size, resources and the age of their users.
Companies may be required to publish annual transparency reports showing the frequency of harmful content on their platforms and the measures they are taking to address it.
Companies will also be expected to show how they are fulfilling their statutory duty of care, in particular by implementing relevant terms and conditions that are “sufficiently clear and accessible, including to children and other vulnerable users”. This may mean that internet companies have to re-draft their T&Cs once again, as many did in light of the introduction of the GDPR in May 2018.
How severe will the impact be?
The proposed new duty of care is not as severe as some might fear. The White Paper notes:
- any new laws will be compatible with the EU E-Commerce Directive (2000/31/EC), which broadly treats providers of an ‘information society service’ (e.g. social media companies) as exempt from criminal or civil liability where their platforms are used for illegal activity;
- the regulator must define the duty of care and publish codes of best practice that internet companies must comply with;
- a company can demonstrate compliance with the duty of care in a manner not set out in the codes of best practice if it explains and justifies how its approach delivers at least the same impact;
- there are a number of ‘online harms’ that are not subject to the duty of care, these being: (i) harms suffered by organisations (as opposed to individuals); (ii) harms resulting from a breach of data protection legislation, cyber security or hacking (as these are covered by other regulation); and (iii) harms suffered on the dark web (as opposed to the open internet).
Summary
The White Paper has so far received a mixed reaction. Given the live streaming of the recent Christchurch terrorist attacks on social media and the role of Facebook as a platform to incite offline violence in Myanmar, some have praised the White Paper for attempting to tackle the spread of illegal and morally reprehensible content online. However, others have criticised the White Paper for attempting to impose a draconian form of censorship and for creating an unacceptable incursion on freedom of speech.
The consultation period closes on 1 July 2019, following which we will await the Government Response. You can add your response to the consultation questions here. The identity of the regulator is also yet to be confirmed.
Companies caught by the proposed new statutory duty may wish to consider their ability to deal with ‘online harms’, for example by implementing measures to ensure that harmful content is removed rapidly. Larger tech companies should be particularly aware of the potential new statutory duty of care given that, in the eyes of the regulator, these companies will be more likely to be regarded as having substantial resources available to them to combat ‘online harm’.
For more information please contact Connor Hearn on +44 (0)20 7203 8911 or at connor.hearn@crsblaw.com.