Artificial Intelligence set to dominate
Artificial intelligence (AI) has been a hotly anticipated topic for some time, but experts are now suggesting AI will dominate industries and come even closer to consumers’ daily lives than first expected.
AI is a branch of computer science that seeks solutions to problems which require intelligence when performed by humans. In short, it involves the creation of “intelligent” machines. Knowledge engineering is a core part of AI: giving machines sufficient information about the world that they can act and react as a human would. Machine learning is another subset of AI – learning without human supervision – which stems from recognising patterns in data. Machine learning algorithms draw inferences without being explicitly programmed to do so; the more data they collect, the smarter they become.
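The idea of drawing inferences from data rather than explicit rules can be sketched in a few lines. The example below is purely illustrative (the messages and labels are invented): the "model" is nothing more than word counts learned from labelled samples, yet it can classify a message it has never seen.

```python
from collections import Counter

# Hypothetical training data: labelled examples, not hand-written rules.
spam_samples = ["win free prize now", "free money win big"]
ham_samples = ["meeting agenda attached", "lunch at noon tomorrow"]

# "Learning" here is simply counting which words appear in each class.
spam_words = Counter(w for msg in spam_samples for w in msg.split())
ham_words = Counter(w for msg in ham_samples for w in msg.split())

def classify(message: str) -> str:
    """Label a message by which training set its words resemble more."""
    words = message.split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

print(classify("claim your free prize"))  # "spam" - inferred, not programmed
```

Nothing in `classify` mentions the word "prize"; the association comes entirely from the data, and adding more labelled samples would refine it.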
How can AI impact Cybersecurity?
As more companies rely on IT systems in their infrastructure, the threat of a cybersecurity breach and the damage it can cause is fast growing.
The number of devices, and the volume of data, that businesses must analyse in order to detect and prevent cybersecurity attacks is growing far faster than the supply of trained personnel available to analyse that data manually. As a result, there is a noticeable gap between what the industry needs in order to fend off increasingly advanced cyberattacks and the number of skilled personnel available to join the fight.
AI, and more particularly machine learning, makes the possibility of autonomous security systems a reality. AI is able to accelerate and automate security issue detection – with the ability to analyse vast amounts of data where a human would struggle, reducing the risks of human misinterpretation. First generation AI analyses the data, looks for threats and facilitates human remediation: people and technology working together to detect, prevent and remedy cyberattacks.
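At its simplest, automated threat detection means learning what "normal" looks like from historical data and flagging departures from it. The sketch below uses invented figures (hypothetical hourly counts of failed logins) and a basic statistical test; real security tooling is far more sophisticated, but the principle of machine-scale anomaly detection is the same.

```python
import statistics

# Hypothetical hourly counts of failed logins; the final hour is a spike
# that would be easy to miss among millions of rows of real telemetry.
failed_logins = [12, 9, 14, 11, 10, 13, 12, 11, 95]

# Learn the baseline from past behaviour.
baseline = failed_logins[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value: float, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

print(is_anomalous(failed_logins[-1]))  # True: 95 sits far outside the baseline
```

A human analyst could apply the same test to nine numbers; the value of automation lies in applying it continuously, across every device and data stream, at a scale no team of analysts could match.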
With hackers across the globe increasingly using AI in their cyberattacks (around 20 billion cyberattacks are reportedly thwarted every day), the threats to businesses are becoming more sophisticated, and the methods of detection and prevention must follow suit. That said, at present there are some notable limitations.
- AI is software that perceives its environment sufficiently to identify events and take action towards a pre-defined purpose. The AI currently deployed to counter cyberattacks is modelled on samples, much like anti-virus signatures. To be truly progressive and autonomous, AI needs to learn continuously within the customer’s environment – on the job, without human influence – rather than being modelled on predefined samples.
- Using AI within cybersecurity also needs to be coupled with the basic security processes already in place. A minimum level of patching, educating personnel and having clear security policies is a fundamental starting point – reportedly 90% of successful cyberattacks begin with a phishing email. Without those basic processes in place to ensure breaches can be prevented (as well as detected through AI), AI will be of little use to businesses.
Legal and ethical implications
Whilst AI continues to evolve and its potential applications seem infinite, for AI systems to become more widely accepted their use must meet minimum ethical standards – and with that come inextricably linked legal issues that businesses need to address. One of the key considerations is accountability.
An ever-present concern, particularly for lawyers, is the difficulty of establishing who is at fault (and who should be held liable) when a machine makes an independent decision and something goes wrong.
Decisions made through AI are based on machine learning rather than direct programming (where it is possible to trace a fault back to defective code or incorrect operation). Establishing the root cause of a problem that arises from machine learning, and tracing defects back to human error in order to attribute liability, is difficult, and simply casting blame on the supplier of the AI device may not always be appropriate.
Estonian officials are working on legislation that will grant robots and AI legal status, whilst some industry experts have suggested the UK adopts a licensing model, similar to that in New Zealand, where an assessment system is introduced for all robotic and AI devices released to the market. Each device would require payment of a levy into a fund when released to the market – with greater risks carrying greater charges. The fund would enable compensation pay-outs where a particular device causes damage (“AI insurance” if you will).
The House of Lords has launched a public inquiry into advances in the field of AI considering “economic, ethical, and social implications” and we await the report (due by 31 March 2018) with interest. Whatever solution is adopted we must achieve transparency and put in place steps to achieve clear accountability: assessing whether responsibility can be attributed to a person, what types of liability are in issue and what impact any failure may have on the people concerned.
Companies need to make security a top priority, and AI could be a huge game-changer in countering cyberattacks more efficiently and effectively. But, with more AI-enabled products and services being introduced into the market, businesses and their advisers need to be prepared to address the difficult ethical considerations and the legal issues that follow (namely, liability) to ensure consumers are sufficiently protected.
For more information on this topic please contact Caroline on +44 (0)20 7203 5381 or at Caroline.Young@crsblaw.com.