
To Regulate or Not to Regulate (Artificial Intelligence) – that is the question

Emerging technologies can give lawmakers a headache, and possibly none more so than artificial intelligence. In this article the Walker Morris Tech & Digital Team explore the complex issues around the regulation of AI.

Back in the 1980s a similar conundrum arose in relation to software: how best to exploit the benefits of a new technology whilst protecting the intellectual creativity that goes into developing code? In that case the answer was relatively simple: software was treated like a book and given copyright protection.

Artificial intelligence and machine learning create all sorts of more complex issues and scenarios (not just legal ones), and as these tools improve we see governments wrestling with the various legal and moral issues that emerge.

The UK government recently published a data strategy paper[1] (see our earlier note here), and over the summer the European Union consulted on a regulatory framework for AI. Whilst we are unlikely to be bound by European lawmakers, it is interesting to understand their thought processes and conclusions, as they are bound to have some influence on the UK government’s thinking.

Artificial intelligence raises many legal questions:

  • It uses and creates large volumes of data – who owns or controls that data? Case law has established that data is not in itself tangible property; it cannot be stolen or owned per se. Rights can arise in relation to data through other legal mechanisms, for example copyright in written material, database rights, confidentiality rights or specific contractual terms. As the House of Lords noted in its report AI in the UK: ready, willing and able? (Report of Session 2017–19), it may be much clearer to talk in terms of “controlling” data in an artificial intelligence scenario. A question for lawmakers will be whether to create further legal rights and obligations in relation to data that emanates from AI tools, robotic activity or other machine learning algorithms. For now, it remains important to deal with these issues clearly in the contract.
  • Some of the data processed using AI tools will be personal data. In complex systems using machine learning algorithms, there could be uncertainty about who takes responsibility for the security of that personal data and who must ensure that it is processed in accordance with data protection laws. We have already seen debates about whether an entity is a controller or a processor in a straightforward contracting scenario; how much more complex will these issues become in an AI/robotic world? Can an algorithm be a data controller? If a data protection impact assessment is needed, how can the controller ensure the transparency or “explainability” necessary for the assessment to be lawful? We are already seeing useful commentary and guidance on these thorny issues from our regulator, but is more regulation needed? The GDPR famously boasts technology neutrality yet contains only a single article directly addressing automated decision-making: is the existing framework really capable of managing such complex issues, and if not, how will that affect individual rights and freedoms?
  • Is a robot or an algorithm a legal person with contractual or tortious responsibility for the actions it takes? Can it own IP? Will algorithms or robots be treated like employees, with their owners or controllers being vicariously liable for their acts? And is there a point at which vicarious liability ends and insurance steps in?
  • Do we need a new set of moral and ethical codes around the use of AI tools, for example in areas such as weapons and healthcare?
  • How do we live alongside AI, and what sort of society do we want to be? How do we deal with the employment issues that AI may bring in terms of job losses – should we be seriously considering some form of universal basic income (UBI)?
  • How can regulators protect competition in digital markets and prevent abuses of power by dominant tech companies that acquire huge volumes of data?
  • Should dominant yet essential AI technology be entitled to ordinary legal protections? Or should we require that such essential technology be shared and licensed on fair, reasonable and non-discriminatory (FRAND) terms?

Some of the above questions were recently addressed by the UK High Court in Thaler v The Comptroller-General of Patents, Designs and Trade Marks. Dr Thaler was the owner and developer of an AI system called DABUS. He filed two GB patent applications in his own name, but stated that he was not the inventor of either invention; instead, he claimed that DABUS was. In his judgment, Marcus Smith J held that DABUS is not a person in the strictly legal sense and as such could not be considered an inventor under section 7 of the Patents Act 1977. Interestingly, Dr Thaler made no attempt to claim that he was the inventor by virtue of his ownership of the DABUS system, because he considered that he would illegitimately be taking credit for an invention that was not his. It is widely considered that, had he named himself as the inventor, the patents would most likely have been granted. Though Dr Thaler’s appeal was ultimately rejected, all parties accepted that work is needed to address the way AI inventions are handled by patent systems.

Some work is already under way to review these issues. The World Intellectual Property Organization (WIPO) is consulting on what changes may need to be made to IP laws as a result of AI development. WIPO held the First Session of its Conversation on AI and IP in September 2019, and a Third Session is due in November this year. WIPO has published a draft issues paper for consultation to provide the basis for a shared understanding of the main questions that need to be discussed in relation to IP policy and AI. Interest is (unsurprisingly) high: more than 250 submissions were received in the consultation process, and over 2,000 people from 130 countries joined a virtual meeting in July this year. WIPO is currently developing preliminary considerations on a number of questions that AI raises for IP policy, for discussion by Member States and other stakeholders.

Following Brexit, we will no longer be required to follow Europe from a regulatory perspective, but it is hard to imagine that European decisions will not have some influence on the UK’s direction of travel in relation to regulating AI.

The European Commission has published a White Paper on artificial intelligence technologies in the European Union and, subsequently, a summary of the responses to its consultation on AI[2]. Its findings suggest a high degree of support in Europe for further policy and regulatory initiatives, both to ensure safe and fair practices and to encourage technological progress and investment.

In its White Paper, the European Commission identified six key areas that may be dealt with in future regulation:

  • handling data that is used in training AI tools
  • data and record-keeping requirements
  • requirements for information to be provided concerning how the AI operates
  • standards for robustness and accuracy
  • human oversight requirements
  • specific requirements for particular types of AI applications (e.g. for remote biometric identification)

On safety and liability, the White Paper also delves into possible policy and regulatory options for the EU, most notably a revision of the Product Liability Directive to cover particular risks created by AI tools.

The results of the Commission’s consultation on the White Paper[3] show that every area of regulation proposed received support from more than 80% of respondents. A clear majority (62% of respondents) also favoured some combination of pre-emptive and reactive market surveillance, which suggests that the Commission may adopt some early measures of intervention as well as the expected after-the-event investigations.

Conversely, only 42% of respondents actively requested the introduction of a new regulatory framework for artificial intelligence. This may instead reflect support for expanding the remit of existing European regulators to tackle AI concerns. For example, the Commission is currently consulting on a new Digital Services Act, which proposes pre-emptive regulation for large digital platform providers such as Google and Facebook. As these platforms rely heavily on big data and AI, any such regulator will need to be empowered to examine and rule on the functioning of platform algorithms if it is to be effective. The Digital Services Act may therefore represent a further step in the development of a piecemeal regulatory framework for AI. The debate around regulating before something happens or intervening afterwards (“ex ante” versus “ex post”) is set to rumble on, with some commentators expressing concern that ex ante regulation will stifle innovation.

Next Steps

In the UK the position is still being debated, and different bodies have different views. Some believe regulation is urgently needed, whilst others consider it too early to legislate sensibly: the technology needs to be deployed more widely, and its impact better understood, before what is needed from a regulatory perspective can be determined.

What is clear, though, is the need to take a holistic approach. What do we mean by that? Well, the consequences of AI are felt in different ways, and its legal implications cannot be looked at in silos – for example, only from a data protection perspective or only from an antitrust perspective. They need to be reviewed as a whole, with the risks and benefits carefully weighed. This is particularly true for technologies like “creative AI” (AI that can build things or create materials such as videos). It may be too early to build laws controlling these sorts of tools, but it is clear that the full impact of creative AI (for example, an AI that builds weapons) could have far-reaching consequences.

It’s going to be an interesting journey – we’ll keep you posted!

 

[1] Department for Digital, Culture, Media and Sport, ‘Policy Paper: National Data Strategy’ (9 September 2020)

[2] European Commission, ‘Open Public Consultation of the European White Paper on Artificial Intelligence’ (17 July 2020)

[3] European Commission, ‘Open Public Consultation of the European White Paper on Artificial Intelligence’ (17 July 2020)
