Data privacy: the challenges to artificial intelligence development
What is artificial intelligence?
The term artificial intelligence (AI), whilst commonly associated with autonomous vehicles and domestic robots, can broadly be defined as the development of computer systems that are able to perform tasks analogous to intelligent human behaviour. Driven by the advent of the Cloud and rapidly increasing volumes of digital data, AI developments have taken place in a number of areas, including machine learning: techniques by which computers learn by example and carry out pattern recognition tasks without being explicitly programmed to do so. In order to learn, machines need “big data”, often described as vast volumes of varied data arriving at high velocity. From a data privacy point of view, the biggest implication of AI is its reliance on big data; properly safeguarding the personal data it contains has therefore become increasingly important for businesses.
Impact of the General Data Protection Regulation (GDPR)
The aim of the GDPR, which will be directly applicable in all EU Member States from 25 May 2018, is to give individuals more control over, and the assurance of greater security for, their personal data. The importance of processing personal data fairly is preserved in the GDPR. Big data analytics can be characterised as a threat to privacy because it applies complex algorithms to draw conclusions about individuals, sometimes with unwelcome effects. A key question for organisations in this context is therefore whether their processing of personal data is fair. Fairness involves several elements, including the following:
1. Effects of the processing
An important concern is the potential for bias in big data analytics. In some circumstances, even displaying different advertisements on the Internet can mean that the users of that service are being profiled in a way that perpetuates discrimination. The GDPR specifically provides that any person – the data subject – has the right “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (Article 22(1) GDPR). The GDPR does not define “legal effect” or “similarly significant effect”.
However, the European data protection authorities - working together in the Article 29 Data Protection Working Party (WP 29) - have recently adopted Guidelines on Automated Decision-making, which are highly likely to impact AI-based services. According to the WP 29’s guidance, “a legal effect suggests a processing activity that has an impact on someone’s legal rights, such as the freedom to associate with others, vote in an election or take legal action”, and “for data processing to significantly affect someone, the effects of the processing must be more than trivial and must be sufficiently great or important to be worthy of attention”. Examples of automated decision-making include automatic refusal of an on-line credit application or e-recruiting practices without any human intervention.
Organisations therefore need to be aware of and factor in the effects of their processing on the individuals, communities and societal groups concerned. It is advisable for data controllers to use appropriate mathematical or statistical procedures for any profiling and take additional measures to prevent discrimination. Given the sometimes novel ways in which data is used in big data analytics, this may be less straightforward than in more conventional data-processing scenarios.
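One "appropriate statistical procedure" an organisation might use to monitor profiling outcomes is a simple demographic-parity check. The sketch below is illustrative only: the group labels, decisions and threshold are invented, and any real monitoring measure would need to be chosen with legal and compliance advice.

```python
# Hypothetical sketch: comparing favourable-outcome rates between two groups
# affected by an automated decision (e.g. credit approvals). All data invented.

def selection_rate(decisions):
    """Fraction of individuals in a group who received a favourable outcome."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values well below 1.0 suggest one group is being favoured over the other."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = application approved, 0 = refused (invented data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.50
```

A low ratio does not by itself establish discrimination, but it flags a pattern that warrants the kind of human review the WP 29 guidance envisages.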
The complexity of machine learning can make it extremely difficult for organisations to be transparent about the processing of personal data. There is an intrinsic difficulty in providing an explanation for an outcome when that outcome is based on an AI algorithm, as the logic behind the machine reasoning may not be expressible in human terms. In some instances it may not even be apparent to individuals that their data is being collected (e.g. their mobile phone location). This lack of transparency can mean that businesses miss out on the competitive advantage that often comes from gaining consumer trust. The GDPR is therefore just one of a growing number of forces driving explainable AI. In theory, explainable AI techniques should produce models whose reasoning can be understood by humans, while maintaining a high level of prediction accuracy. Explanation is essential in order to ensure transparency, and businesses should start considering it at the early design stages of AI product development.
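Considering explanation "at the early design stages" can be as simple as preferring models whose outputs decompose into per-feature contributions. The sketch below uses an invented linear scoring model with made-up feature names and weights; it is one illustration of explanation-by-design, not a prescribed approach.

```python
# Hypothetical sketch of explanation-by-design: a linear scoring model whose
# output can be broken down into the contribution of each input feature.
# Feature names and weights are invented for illustration.

WEIGHTS = {"income": 0.5, "years_at_address": 0.3, "existing_debt": -0.6}

def score_with_explanation(applicant):
    """Return a score plus the contribution each feature made to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 4.0, "years_at_address": 2.0, "existing_debt": 1.5}
)
print(f"score = {score:.2f}")  # prints score = 1.70
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Because every decision comes with a human-readable account of which inputs drove it, an organisation is better placed to give the "meaningful information about the logic involved" that transparency requires.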
Organisations additionally need to consider whether the use of personal data in big data analytics is within people’s reasonable expectations. Deciding what is a reasonable expectation is inevitably linked to the issue of transparency. The view is often put forward that people are becoming increasingly less concerned about how their personal data is used. This is said to be particularly true of ‘digital natives’ – younger individuals who are happy to share personal information via social media. However, research suggests that this view can be too simplistic, given the complexities of AI and additional issues surrounding consent to the processing of personal data.
Issue of consent
If an organisation is relying on an individual’s consent to process their personal data, then that consent must be a freely given, specific and informed indication that they agree to the processing. The GDPR provides that consent must also be “unambiguous” and given by a “clear affirmative action”, such as ticking a box on a website. These requirements can pose particular problems for Cloud-based voice assistants, for example. Will it be possible to ask for the consent of each individual present in a room before data is collected on what is being said?
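For organisations that do rely on consent, the GDPR's attributes translate naturally into a record kept per purpose. The sketch below is an invented data structure, not a legal standard: the field names and the `pre_ticked` check are illustrative assumptions, reflecting that a pre-ticked box is not a clear affirmative action.

```python
# Hypothetical sketch of a consent record capturing the attributes discussed
# above: freely given, specific, informed, and an unambiguous affirmative
# action. Field names are invented for illustration.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    data_subject_id: str
    purpose: str               # "specific": one purpose per record
    information_shown: str     # "informed": what the person was told
    affirmative_action: str    # e.g. "ticked_box"; never "pre_ticked"
    timestamp: datetime
    withdrawn: bool = False

    def is_valid(self) -> bool:
        """Consent fails if withdrawn, uninformed, or not affirmatively given."""
        return (not self.withdrawn
                and bool(self.information_shown)
                and self.affirmative_action != "pre_ticked")

record = ConsentRecord(
    data_subject_id="subject-42",
    purpose="voice-assistant audio processing",
    information_shown="Privacy notice v3 displayed in app",
    affirmative_action="ticked_box",
    timestamp=datetime.now(timezone.utc),
)
print(record.is_valid())  # prints True
```

Keeping one record per purpose, with the information actually shown, also helps an organisation demonstrate compliance if the consent is later questioned.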
Data privacy therefore poses a number of challenges to AI development. Undoubtedly, data protection awareness will become increasingly relevant, and organisations should set up specific governance guidelines when dealing with AI; such guidelines should address not only the overall technical and data-input processes, but also a number of legal and ethical issues.
For more information, please contact Anjali on +44 (0)1483 252 576 or at Anjali.Chandarana@crsblaw.com.