Out-Law Analysis

Overcoming barriers to AI adoption – liability


Businesses can deploy artificial intelligence (AI) systems to their advantage, while managing the uncertainty and risk of liability associated with AI use, by paying close attention to how contracts with suppliers and customers are drafted and ensuring there is robust governance over their use of the technology.

The responsible use of AI across different industrial sectors has the potential to increase productivity and efficiency, and bring positive changes to society, the economy and the environment. PricewaterhouseCoopers' study on the global exploitation of AI has predicted that AI could contribute $15.7 trillion to the global economy by 2030. Yet businesses remain reluctant to deploy AI widely, partly due to the uncertainty around who bears liability when things go wrong.

In Europe, legislative changes are expected that may alter the existing legal framework around liability for AI and bring greater clarity. Nevertheless, experts from Pinsent Masons, together with Jacob Turner, a barrister at Fountain Court Chambers, explained in a webinar earlier this summer that there are practical steps businesses can take now to manage the risk of liability in their current commercial relationships involving AI.

AI has utility across sectors

AI has transformative potential. In the life sciences sector, the use of AI is being explored across the life cycle of drugs and therapies: from drug discovery and development through to therapeutics, diagnostics and other healthcare and fitness apps to prevent disease. AI is becoming increasingly sophisticated at performing human tasks more efficiently, more quickly, at lower cost and, arguably, more safely than humans can. Significant benefits can be gained from the use of AI in the energy sector too: in the discovery of offshore energy resources, rig maintenance and process optimisation. Across sectors, AI systems are capable of achieving results that would be impossible even with specialist human input. In health care, for instance, doctors can spot cancers earlier with AI image recognition, even where there is no sign of disease visible to the human eye.

Cerys Wyn Davies

Partner

Attributing liability for AI may be simpler in certain sectors than others, such as in life sciences where it is unlikely that AI systems will be allowed to function autonomously without any human oversight in the near future

Despite the benefits on offer, the rate of actual deployment of AI remains slow, partly owing to the risk of liability associated with the use of AI and the uncertainty around where this liability would lie.

Unlike previous revolutionary technologies that improved efficiency, AI systems are inherently dynamic and continue to learn, change and improve over time. While some AI systems currently automate activities that would otherwise require human effort, the technology is evolving towards systems that can act autonomously, without any human intervention, to achieve results beyond human capabilities. When something goes wrong, these characteristics make it challenging to identify how and why an AI system failed. As Jacob Turner explained in our webinar, not every AI decision can be explained, and it can be difficult to work out who, if anyone, should be held responsible for an AI system going wrong, especially where that system can make decisions autonomously and is constantly evolving.

The liability issue is exacerbated by the complex multi-party ecosystems in which AI systems are likely to be deployed. The success of an AI system often depends on the quality and sufficiency of its data, which may come from several data providers. The procurement and processing of personal data must comply with the relevant privacy, confidentiality and data protection regimes. The designer of the system architecture and the parties involved in developing the AI algorithm determine how the data is initially used, although the more autonomous the AI system becomes, the less input these people may have. In addition, the manner in which the results from the AI system are used by suppliers, licensees or end-users at various points in the supply chain could determine whether or not the promised benefit is achieved.

What businesses can do now to manage risk

Businesses seeking to deploy AI can ensure that their commercial arrangements with upstream suppliers and downstream customers set out the performance obligations expected from the AI system as precisely as possible. There are two core aspects to be considered.

First, provision must be made for the AI system to learn over time. To do so, 'use' data must be channelled back to the application to allow that learning to occur. Second, the learning process of AI is autonomous and therefore not entirely predictable. The dynamic nature of an AI system means that a clear definition of the scope of AI performance, or a warranty relating to purpose, may not be practical. In addition, since data is a critical component of AI systems, appropriate contractual frameworks need to be conceived and implemented to ensure its seamless capture and processing.

Dr. Nils Rauer, MJI

Rechtsanwalt, Partner

Providers … should be careful to balance their desire to minimise their own risk with assuring the customer that its AI-enabled product represents some value addition to its business

It is possible for AI providers to seek to exclude their liability through appropriately drafted contractual disclaimers. Guidance from several organisations makes plain that the limitations of AI systems should be pointed out to implementers, particularly for predictive models whose outputs are statistically based. Reliance on those outputs needs to be judged accordingly in the context of the relevant application.

In some cases, AI providers may seek to make no representations as to outputs in their contract drafting, placing the emphasis and all responsibility on the implementer for subsequent decision making. However, providers adopting this approach should be careful to balance their desire to minimise their own risk with assuring the customer that its AI-enabled product represents some value addition to its business. Businesses must also be wary of how an AI system will in fact be applied in the market. Although there is little precedent, English courts may strike down overly broad disclaimers or exclusions of warranty under the Unfair Contract Terms Act 1977, especially where an AI system is making decisions without any human cross-check or intervention, such as in the case of online credit reviews.

Attributing liability for AI may be simpler in certain sectors than others, such as in life sciences where it is unlikely that AI systems will be allowed to function autonomously without any human oversight in the near future. Businesses in life sciences and other 'high-risk' sectors are therefore likely to encounter more traditional models of duties of care and attribution of liability.

Currently, the health care professional, or the relevant hospital trust, is held liable for mistakes in medical care given to patients, whether or not AI systems were used to assist. However, AI suppliers feeding into the health care system could be held to appropriately drafted performance obligations in their contractual arrangements: warranties around the exercise of reasonable skill and care in developing, testing and monitoring the AI system, fitness for purpose, and data quality and sufficiency. Such obligations would assure hospital trusts and health care professionals that an AI system can be safely deployed in a health care setting. Current practice in this sector could, in time, evolve towards a more realistic apportionment of risk between the AI supplier and the hospitals.

Setting up and strengthening AI governance frameworks

Robust and detailed governance mechanisms around the use of AI can help businesses adopting AI systems to address future risk of liability.

High-level ethical principles expected of AI systems, such as fairness, explainability, transparency and accountability, are now beginning to take a more granular form that can be implemented by businesses, thanks to the work of several governmental, non-governmental and international organisations. For instance, the UK Information Commissioner's Office (ICO) and the Alan Turing Institute have collaborated on detailed guidance to help organisations explain processes, services and decisions delivered by AI systems. The ICO has also published guidance on AI and data protection to help organisations mitigate data protection risks that may arise from the deployment of AI systems.

Ultimately, governance will be at the heart of managing risk and enabling adoption. The more granular and detailed legislative and best practice principles become, the clearer it will be to businesses what those principles mean in organisational and technical terms.

Best practice guidance is currently focused on the public sector and on uses involving personal data. However, it is only a short step for that guidance to extend to the private sector and general industrial applications. More collaboration and cross-fertilisation of implementable governance actions across different sectors would benefit businesses.

Resolving contentious AI liability issues

In many ways, the complex multi-party ecosystems in which AI systems are deployed, and the potentially multi-jurisdictional disputes that can arise from them, differ little from the highly technical systems integration disputes that many businesses in the IT sector will already be familiar with. However, AI disputes are likely to be more technically complex, because AI systems are constantly learning and changing, and they will involve detailed forensic investigations into AI development programmes and other technical subject matter. Given these technical and jurisdictional complexities, it may be beneficial for businesses contracting with one another over the provision of AI systems to opt for arbitration proceedings to settle any disputes that may arise.

David McIlwaine

Partner

It is foreseeable that there will be US-type class action claims brought by a defined group of individuals against a business that uses AI

In some cases, liability will be attributed to the party at fault for harm arising from the use of AI. In others, businesses could find themselves liable regardless of whether they are at fault where a system of 'strict liability' applies. In still other cases, a hybrid model will apply, where parties are 'strictly' liable for certain defects unless they have taken positive action that makes them eligible for statutory 'defences', such as those currently available under the EU product liability regime.

One area that can give rise to dispute is the potential use of personal data by some AI applications, such as in the life sciences sector. There may be claims for compensation brought under the GDPR and other data protection regimes. It is also foreseeable that there will be US-type class action claims brought by a defined group of individuals against a business that uses AI.

To address these risks, businesses need to put good governance and documentation procedures in place from the inception of AI development programmes. It is imperative that AI businesses document progress and log actions to allow easier tracing of responsibility at a later stage. Adequate insurance could also assist in protecting against large claims. In some cases businesses may already be covered for certain kinds of AI failures by their existing cyber and data insurance, but existing policies may be inadequate against more serious AI faults that cause injury to human life or damage to property.

Future outlook

Change to the legal framework around liability for use of AI is expected in the EU. Earlier this year the European Commission published a white paper on AI and an inception impact assessment, and opened public consultations on an overall AI regulatory framework that is favourable to investment and positions the EU as an ecosystem of 'excellence' and 'trust' for AI. The latest consultation closed earlier this month and the Commission has published the more than 130 responses received. The Commission is expected to publish its proposal for a regulation in the first quarter of 2021.

Sarah Cameron

Legal Director

A careful balance needs to be struck between regulatory clarity and over-regulation

Given AI's potential to be applied ubiquitously, it is unlikely that there will be a general AI Act covering all uses of AI. Existing product liability regimes can be adapted so that they apply more clearly to AI applications and address the uncertainty of liability in unique AI contexts. A careful balance needs to be struck, however, between regulatory clarity and over-regulation, given the consensus that existing legislation is largely adequate. The Commission's proposal for proportionate regulatory intervention in certain high-risk sectors, such as health care and transport, and in certain high-risk applications, such as the use of remote biometric identification and intrusive surveillance technologies, has nevertheless been broadly welcomed.

Co-written by Sarah Cameron, Cerys Wyn Davies, David McIlwaine, Nils Rauer and Krishna Kakkaiyadi of Pinsent Masons. Contributions from Jacob Turner, barrister at Fountain Court Chambers.
