
AI in care and later/retirement living: Innovations and legal implications

AI in care and later/retirement living: Social and commercial context

The use of artificial intelligence technology (AI) in the care and later/retirement living sectors is increasing apace. The global AI in healthcare market was valued at US$10.4 billion in 2021 and is expected to grow at a compound annual rate of 38.4% between 2022 and 2030 [1]. The Covid-19 pandemic afforded AI in these sectors an unexpected boost, prompting the deployment of detection and diagnosis solutions, as well as companionship and communication tools, more quickly than would otherwise have been the case. With AI in care outcomes being largely positive to date, and with the constant development of new AI capabilities promising to further revolutionise various aspects of care and later/retirement living, the trend looks set to continue.
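By way of a back-of-the-envelope illustration (our arithmetic, not a figure from the cited source), if that 38.4% rate compounds annually from the 2021 valuation across the nine years from 2022 to 2030, it implies a market of roughly:

US$10.4bn × (1 + 0.384)^9 ≈ US$10.4bn × 18.6 ≈ US$194bn

In other words, a market nearly twenty times its 2021 size by the end of the decade.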

Against this social and commercial backdrop, Walker Morris’ Retirement Living and Technology specialists Jo Stephenson, Lucy Gordon and Ryan Doodson offer practical insights into the innovations and legal implications associated with the use of AI in care.


Applications and innovations of AI in care and later/retirement living

The Covid-19 pandemic and associated lockdowns prompted an urgent rollout of certain AI solutions within the care and later/retirement living sectors. Almost overnight, we saw care/healthcare facilities and later living/retirement living communities adopt solutions such as: AI-driven temperature and symptom detection; patient monitoring systems; virus/infection detection, prediction and modelling systems; automated communication and data-sharing between patients, relatives, care staff and medical professionals; AI-driven diagnosis and treatment provision; and AI-assisted cleaning and sanitising tasks, as well as drone and robotic deliveries of medication, food and essential supplies.

Beyond measures specifically implemented in response to Covid-19, other increasingly prevalent AI in care solutions include: the application of machine learning and robotics to provide social interaction and company for patients/residents when 24/7 human contact is not possible; the administering of medication; assistance with mobility; the monitoring of patient wellbeing generally; and communication with care providers.

Ongoing technological advances mean that AI in care innovations continue to abound. What was once science fiction is fast becoming science fact. AI clinical surveillance can now ascertain a patient’s pain level from facial movement and can alert care or medical staff accordingly. Machine learning and analysis of behavioural patterns can provide prompts, assistance and medical treatment to delay the onset of dementia. AI analysis of millions of data points can predict infections and chronic diseases, can speedily and accurately identify fractures, tumours and neurological abnormalities, and can suggest treatment options and plans. Rather than AI taking over the human element of care provision, the effective deployment of AI as an augmentation is helping to streamline processes, freeing up care providers’ time to focus on the core care needs of their patients and residents.

And it’s not just about medical and care provision per se.  AI in care also encompasses a wide range of ‘smart’ technology throughout care and retirement living communities.  Smart meters can improve energy efficiency; facial recognition can improve security; and AI-assisted lighting, heating, smart speakers and voice-assisted communications and operations can help to improve a resident’s all-round quality of life. Automating and augmenting some of the more ‘routine’ aspects of a carer’s role can also have significant social and mental health benefits for staff. That could be of interest and assistance to care and retirement living providers in the face of the ongoing recruitment crisis.  Big data collection and analysis can help care providers to better understand, and therefore address, operational issues. It can also improve efficiency and accuracy when it comes to regulatory compliance and corporate reporting. AI in care can therefore also assist with an organisation’s ESG (environmental, social, governance) agenda.

AI in care and later/retirement living: Legal implications

Like any emerging product, trend or market, the use of AI in care (healthcare and later/retirement living) does raise a number of legal questions. A particular challenge can arise when something goes wrong and the issue of liability falls to be decided.

Usually, where there has been an error in the provision of care or support, the party liable – whether a person or a company – is relatively easy to identify. With AI, however, a number of parties can be involved: the designer, developer, manufacturer, programmer, provider of underlying data, the end-user (which, in the care/retirement living context, might be the care provider or the patient/resident) and the AI system itself. The involvement of multiple parties, and the potential for the AI system to develop itself over time via machine learning, make the question of which party – or parties – is ultimately responsible a more complex one.

The existing legal and regulatory framework in the UK does not provide a comprehensive solution: it does not yet specifically address the use of AI or the allocation of legal liability. In September 2021, the UK government published the National AI Strategy, a ten-year plan to cement the UK’s position as a global AI ‘superpower’. To support that ambition, the government issued its AI Regulation Policy Paper in July 2022, explaining its intention to seek input from stakeholders across the AI industry ahead of setting out further detail on any framework and implementation plan. Progress in relation to those consultations, and any AI White Paper, is awaited. Walker Morris is monitoring developments.

In the meantime, in the absence of a tailored legal and regulatory framework for AI, case-by-case application of existing fundamental legal principles should inform the determination of legal liability in the vast majority of circumstances.

Contract

In many cases, care providers (which for the purposes of this article includes providers of later/retirement living) will be able to proactively address the issue of legal liability upfront, by negotiating and agreeing bespoke contracts with the party supplying the AI technology. In those circumstances, the care provider and the supplier will be able to pre-emptively provide, in the contract, for certain ‘what if’ situations, including agreeing how liability will be attributed or excluded. However, where AI is acquired ‘off-the-shelf’, it may prove more difficult for a care provider to negotiate bespoke terms. In those cases, the extent to which a care provider can secure contractual protection will largely depend on the terms offered by the supplier and the care provider’s bargaining position.

In any event, the AI contract will set out the parameters governing the provision of the system/service/product, including any provisions in respect of development (if bespoke), implementation, support and limitations regarding its functionality. Contractual terms offered by suppliers will generally include strict obligations on the care provider, for example requiring it to keep all AI software up to date and to comply with all instructions/stipulations regarding use. If the care provider fails to comply, that could invalidate any warranties, assurances and/or recourse that the supplier might otherwise provide. The supplier will invariably also want to cap its liability under the contract, and to exclude liability where this is permitted by law.

It is worth expressly mentioning another area likely to be covered by the AI contract – data protection. The majority of AI/machine learning based products will require the input of data and, in the case of such products in care, significant volumes of personal data (including sensitive personal data). That being the case, the care provider should make sure that all requisite consents have been obtained, and that the AI contract offers robust and appropriate contractual protections in respect of the personal data, including the processing of it. This will often be an area of contention during negotiations, both in respect of the limitations on the supplier’s liability and in respect of intellectual property – particularly around the ownership of any outputs or learnings gleaned from the product.

A care provider should take specialist legal advice before entering into any AI contract with a view to maximising the contractual protections that it is able to secure.

Negligence

What happens if something goes wrong and a resident is harmed or property is damaged? If a care provider has taken all required actions in respect of the AI, has provided adequate training and has used the AI as instructed, who is liable if a resident, relative or employee (or even the care provider itself) seeks to bring a negligence claim?

For liability to arise in negligence, a claimant will need to establish: a duty of care; that the duty was breached; and that the breach caused the injury/damage. Factors such as proper use of the AI, and the training of employees using it, will be relevant when it comes to establishing the existence, or not, of any breach. Liability could also depend on the nature or cause of the damage, which could be traced back to one or more of the manufacturer, developer, designer, programmer or some other individual or entity in the supply chain. The potential involvement of multiple parties could further complicate the injured party’s ability to successfully pursue a claim. Where the AI is classed as a ‘defective product’, it may be possible to bring a claim in negligence directly against the manufacturer. A potential issue also arises in relation to ‘foreseeability’: one aspect of the recoverability of loss in negligence is that it must have been a foreseeable consequence of the defendant’s breach of duty. In a scenario where AI behaves in a way which is wholly unforeseen, or where its actions are fully autonomous (in other words, it has acted based on its own decision-making through machine learning), it could be difficult to establish that any party was at fault.

Claims against AI?

Alternatively, what would be the legal position where it was alleged that the AI itself was at fault?

Currently, AI cannot be held liable for its actions, as it does not have separate legal personality. As things stand, therefore, a care provider (or resident or employee etc.) would not be able to make a claim against the AI itself. However, as the use and functionality of AI develops, it may be that certain AI could be given legal personality. Were that to happen, AI could become subject to both civil and criminal liability, which would in turn raise questions around what the ‘punishment’ for the AI could be. If monetary, then, as the AI would not have its own funds, arrangements would need to have been put in place to enable it to meet its obligations. That could occur at the contract stage, potentially through the use of compulsory insurance (see below) or other funding arrangements. Introducing a ‘no fault’ strict liability regime in respect of AI is one potential solution, with collective responsibility and funds raised through contribution to a centralised pool (albeit that comes with its own complications, not least because AI is so broad in its application and the level of autonomy may differ greatly from one system to another).

Vicarious Liability for AI?

Another question is whether the creator of, person in charge of, or care provider using AI could be found vicariously liable for its actions, in the same way that an employer can be held responsible for the actions of its employees. This is so far entirely untested and remains just a suggestion, but it could be a mechanism for the government to consider as part of its wider AI regulation strategy.

Insurance

A workable option might be for care providers to obtain insurance to cover liability arising from their use of AI. AI and machine learning insurance policies are still in their early stages, but if demand for insurance grows at anything like the rate of growth we are seeing in the use of AI in care, it is anticipated that the market will provide solutions before too long. Alternatively, it could be that, in respect of certain AI, the same approach may be taken as with autonomous vehicles. The Automated and Electric Vehicles Act 2018 extends compulsory motor vehicle insurance to cover the use of automated vehicles in automated mode, so that victims (including the ‘driver’) of an accident caused by a fault in the automated vehicle itself will be covered.

Employment and Diversity

The care sector is experiencing a serious recruitment crisis. Whilst AI in care is clearly one potential solution, the deployment of AI brings with it a number of related requirements: finding staff with the skills to operate and use the technology to its full potential; rolling out training; updating disciplinary and data protection policies; and ensuring the AI is not discriminatory in its application. The Equality and Human Rights Commission is currently monitoring the use of AI by public bodies and has issued guidance on how AI systems, including facial recognition technology, can inadvertently cause discriminatory outcomes.

AI in care: How Walker Morris can help

There is no doubt that the application, scope and deployment of AI in care is on the rise.  Alongside that, we can expect to see the emergence of a developing area of law which considers the allocation of responsibility and liability in respect of AI generally, and then, over time, more specifically in relation to AI in care.

But Walker Morris’ specialist Retirement Living and Technology experts understand that care providers and retirement living developers and operators are considering, and in many cases implementing, the use of AI now.

Our Retirement Living and Technology specialists are drawn from a dedicated pool of corporate, commercial, employment, real estate, regulatory and dispute resolution lawyers. As well as keeping our clients abreast of legal and regulatory developments, our specialists work seamlessly across all the various disciplines that come into play in the care and later/retirement living sectors. We provide proactive advice on contractual protections and liabilities; offer policy assistance on regulatory, compliance and employer/employee matters; and provide sensitive, commercial advice when employment issues, claims or concerns do arise, or when regulatory investigations or crisis management are required. We advise on legal, regulatory and ethical issues when it comes to the deployment of AI in care and the impact on employees, patients/residents and family members. And, if or when something unexpected occurs or something goes wrong in this fast-paced, developing space, we can provide risk management and dispute resolution advice, as well as damage limitation and mitigation strategies.

Please contact Jo Stephenson, Lucy Gordon or Ryan Doodson.


[1] Technology Magazine, 28 March 2022.

Jo Stephenson, Partner, Corporate

Lucy Gordon, Partner, Employment & Immigration – +44 (0)113 283 4552

Ryan Doodson, Senior Associate, Commercial