Artificial Intelligence set to dominate
Artificial intelligence (AI) has been hotly anticipated for some time, but experts are now suggesting it will dominate industries and touch consumers’ daily lives even more closely than first expected.
AI is a branch of computer science that looks for solutions to problems which require intelligence when done by humans. In short, it involves the creation of “intelligent” machines. Knowledge engineering is a core part of AI: machines can act and react like humans only when they have sufficient information about the world. Machine learning is another subset of AI – learning without human supervision – which stems from recognising patterns in data. Machine learning algorithms draw inferences without being explicitly programmed to do so: the more data an algorithm processes, the smarter it becomes.
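To make that last point concrete, the sketch below shows a model inferring a classification rule from examples rather than from hand-written instructions. It is a minimal illustration using the open-source scikit-learn library; the message features, data and labels are invented purely for this example.

```python
from sklearn.tree import DecisionTreeClassifier

# Each row describes one past email: [number_of_links, contains_attachment].
# Labels record a human's earlier judgement: 1 = malicious, 0 = benign.
# All values here are invented for illustration.
past_messages = [[12, 1], [8, 1], [1, 0], [0, 0], [9, 1], [2, 0]]
labels = [1, 1, 0, 0, 1, 0]

# No rule such as "more than five links means malicious" is ever written by
# a programmer; the model infers its own decision boundary from the data.
model = DecisionTreeClassifier(random_state=0).fit(past_messages, labels)

# The trained model generalises to a message it has never seen before.
print(model.predict([[10, 1]]))  # expected output: [1] (inferred malicious)
```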
How can AI impact cybersecurity?
As more companies build their infrastructure around IT systems, the threat of a cybersecurity breach – and the damage one can cause – is growing fast.
The number of devices, and the volume of data, that businesses must analyse in order to detect and prevent cybersecurity attacks is growing far faster than the supply of trained personnel available to analyse that data manually. As a result, there is a noticeable gap between what the industry needs to counter increasingly advanced cyberattacks and the number of skilled personnel available to join the fight.
AI, and machine learning in particular, brings autonomous security systems within reach. AI can accelerate and automate the detection of security issues, analysing volumes of data where a human would struggle and reducing the risk of human misinterpretation. First-generation AI analyses the data, looks for threats and facilitates human remediation: people and technology working together to detect, prevent and remedy cyberattacks.
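As one concrete illustration of automated detection, the sketch below flags anomalous login events with an unsupervised model so that suspected incidents can be surfaced for human remediation. It uses scikit-learn’s IsolationForest; the feature layout, contamination setting and data are assumptions made for illustration, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row is an invented feature vector for one login:
# [hour_of_day, megabytes_downloaded].
normal_logins = np.column_stack([rng.normal(10, 2, 500),
                                 rng.normal(50, 10, 500)])

# Fit an unsupervised anomaly detector on historical (mostly normal) activity.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# New events: one typical login, and one at 3 a.m. pulling 900 MB.
new_events = np.array([[10.5, 48.0], [3.0, 900.0]])
print(detector.predict(new_events))  # expected: [ 1 -1]; -1 flags the outlier
```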
With hackers across the globe increasingly using AI in their cyberattacks (around 20 billion cyberattacks are thwarted every day), the threats to businesses are becoming more sophisticated – and the methods of detection and prevention must follow suit. That said, at present there are some notable limitations.
- AI is software that perceives its environment sufficiently to identify events and take action in pursuit of a pre-defined purpose. The AI currently deployed to counter cyberattacks is modelled on samples and used much like anti-virus signatures. To be truly progressive and autonomous, AI needs to learn continuously within the customer’s environment in order to perform more effectively – learning on the job, without human influence, rather than from predefined samples (a minimal sketch of this idea follows this list).
- Using AI within cybersecurity also needs to be coupled with sound basic security processes. A minimum level of patching, personnel education and clear security policies is the fundamental starting point – reportedly, 90% of successful cyberattacks begin with a phishing email. Without those basics in place to prevent breaches (as well as detect them through AI), AI will be of little use to businesses.
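In machine-learning terms, the “learning on the job” described in the first point above corresponds to online (incremental) training: the model is updated as new data arrives rather than being fitted once to a fixed set of samples. The sketch below is a minimal illustration of that mechanic using scikit-learn’s SGDClassifier on invented data; it does not describe any particular security product.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # 0 = benign event, 1 = malicious event

rng = np.random.default_rng(1)
for day in range(30):
    # Each day brings a fresh batch of labelled events from the live
    # environment (synthetic stand-ins here).
    X_batch = rng.normal(size=(100, 4))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 1).astype(int)
    # partial_fit updates the existing model in place instead of
    # retraining it from scratch on a predefined sample set.
    model.partial_fit(X_batch, y_batch, classes=classes)

# The continuously updated model can then score new, unseen events.
print(model.predict(rng.normal(size=(3, 4))))
```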
Legal and ethical implications
Whilst AI continues to evolve and its potential applications seem limitless, AI systems will only become more widely accepted if their use meets minimum ethical standards – and with that come inextricably linked legal issues which businesses need to address. One of the key considerations is accountability.
An ever-present concern, particularly for lawyers, is the difficulty of establishing who is at fault (and who should be held liable) when a machine makes an independent decision and something goes wrong.
Decisions made through AI rest on machine learning rather than on direct programming (where a fault can be traced back to defective code or incorrect operation). Establishing the root cause of a problem that arises from machine learning, and tracing defects back to human error in order to attribute liability, is difficult – and simply casting blame on the supplier of the AI device may not always be appropriate.
Estonian officials are working on legislation that would grant robots and AI legal status, whilst some industry experts have suggested the UK adopt a licensing model, similar to that in New Zealand, under which an assessment system is introduced for all robotic and AI devices released to the market. Each device would require payment of a levy into a fund on release, with greater risks carrying greater charges. The fund would enable compensation pay-outs where a particular device causes damage (“AI insurance”, if you will).
The House of Lords has launched a public inquiry into advances in the field of AI, considering their “economic, ethical, and social implications”, and we await the report (due by 31 March 2018) with interest. Whatever solution is adopted, we must achieve transparency and put steps in place to ensure clear accountability: assessing whether responsibility can be attributed to a person, what types of liability are in issue, and what impact any failure may have on the people concerned.
Summary
Companies need to make security a top priority, and AI could be a huge game-changer in countering cyberattacks more efficiently and effectively. But with more AI-enabled products and services being introduced to the market, businesses and their advisers need to be prepared to address the difficult ethical considerations and the legal issues that follow (namely, liability) to ensure consumers are sufficiently protected.
For more information on this topic please contact Caroline on +44 (0)20 7203 5381 or at Caroline.Young@crsblaw.com.