Anonymity: Vital for privacy or a source of harm?
There is power in social media anonymity for people in difficult or dangerous situations: it gives them a voice without fear of vilification. I am talking here about individuals suffering political repression or even social condemnation in a world of 'cancel culture'. Key to this power is that the real names and personal data behind a social media account are rigorously protected by the platforms. However, the attraction of online anonymity has also fuelled a rise in fake social media accounts, online trolling and harmful online abuse, with little to no accountability. As such, there is increasing pressure on social media platforms to take responsibility for the harms that can be caused on their services.
The UK government has published the draft Online Safety Bill, which would establish a new regulatory regime to address illegal and harmful content online by increasing the accountability and liability of social media platforms. In addition, Facebook, YouTube and Twitter have been working together to try to agree on a definition of 'hate speech' and on how to balance taking down abusive or harmful content with protecting principles of free speech. These actions focus on solving the problem of online harm by identifying and taking down content. But what about focusing on the source of the content at the point of registration?
Following the recent Euro 2020 football final, three England players were subjected to abhorrent racial abuse online. Facebook and Twitter were quick to condemn the behaviour and take down the offending content. However, the events have sparked interest in shifting the focus from content monitoring to registration and individual identification.
Facebook removed approximately 1.3 billion fake accounts in Q1 2021 and still estimates that 5 percent of its profiles are fake. A vast number of fake accounts are generated automatically by software designed to set up false accounts, but others are created by individuals purporting to be someone they are not. Both practices make accountability difficult when harmful content is posted from such an account. One solution may be to introduce ID verification for all social media accounts: the external user name could remain anonymised (protecting privacy and free speech principles), but a verified person would be accountable for the activity on every account. But is this an effective approach to minimising online abusive or hateful content?
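To make this design concrete, the sketch below is a hypothetical illustration, not any platform's actual system; all names (such as VerifiedRegistry and resolve_for_abuse_case) are invented. It shows how a verified identity could be held separately from a public pseudonym, so that a handle stays anonymous to other users while a verified person remains accountable behind it.

```python
# Minimal sketch of a "pseudonymous but accountable" account model.
# Hypothetical illustration only; not any platform's real API.
import uuid


class VerifiedRegistry:
    """Holds verified real-world identities separately from public handles."""

    def __init__(self):
        self._identities = {}  # account_id -> verified legal identity
        self._handles = {}     # public handle -> account_id

    def register(self, legal_name: str, id_document: str, handle: str) -> str:
        """Create an account: the handle is public, the identity is not."""
        account_id = str(uuid.uuid4())
        self._identities[account_id] = (legal_name, id_document)
        self._handles[handle] = account_id
        return account_id

    def resolve_for_abuse_case(self, handle: str, warrant_ref: str) -> tuple:
        """Reveal the verified identity only under a documented legal process."""
        account_id = self._handles[handle]
        # In practice this access would be logged and independently audited.
        print(f"Identity disclosed under reference {warrant_ref}")
        return self._identities[account_id]


registry = VerifiedRegistry()
registry.register("Jane Doe", "passport:123456789", "@anon_voice")
print(registry.resolve_for_abuse_case("@anon_voice", "CASE-2021-001"))
```

The point of the separation is that ordinary interactions touch only the public handle, while the identity mapping is accessed solely under a documented legal or abuse process.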
For ID verification to be an effective step towards combating online abuse, social media platforms need to foster public trust in the digital ID process, as well as in the platform's compliance with data protection and security laws. The UK government has recently published the "UK digital identity and attributes trust framework", which sets out rules governing the use of digital identity products. While this policy paper is an important step towards fostering public trust, organisations must also ensure they transparently comply with data protection and information security laws so that the ID and personal data behind an account remain protected. Only with this compliance will consumers, particularly those relying on anonymity for a protected voice, be comfortable agreeing to online ID verification.
Introducing ID verification for social media accounts would be a positive step towards reducing harmful content posted by individuals hiding behind anonymity. Platforms could then tackle two important causes of online harm: 1) ensuring accountability for every account at the registration stage through digital ID verification, and 2) continuing to cooperate on, and comply with, increased regulation around monitoring and removing content.