Data Protection – February 2020
Focus on artificial intelligence and emerging technologies; other recent developments.
Focus on artificial intelligence (AI) and emerging technologies
The past month has seen a flurry of activity in the fast-moving world of AI and emerging technologies such as blockchain, along with continuing discussion of the legal and ethical implications of these technologies and their use.
During 2019, the Information Commissioner’s Office (ICO) started to develop its first auditing framework for AI (see the initial call for participation), which will “inform future guidance for organisations to support the continuous and innovative use of AI within the law”. A series of blog posts was published on a range of technology and innovation topics, including: the techniques organisations can use to comply with data minimisation requirements when adopting AI systems; key considerations for organisations undertaking data protection impact assessments for AI systems; and the steps that organisations can take to manage the risk of discriminatory outcomes in machine learning systems.
The ICO is now formally consulting until 1 April 2020 on draft AI auditing framework guidance for organisations. The guidance contains advice on how to understand data protection law in relation to AI, together with recommendations for organisational and technical measures to mitigate the risks AI poses to individuals. It is aimed at technology specialists developing AI systems and at risk specialists whose organisations use AI systems, including data protection officers, general counsel and risk managers, and sets out what the ICO considers best practice for data protection-compliant AI. The ICO stresses that it is essential for the guidance to be both conceptually sound and applicable to real-life situations, because it will shape ICO regulation in this space. Feedback from those developing and implementing AI systems is therefore considered essential.
On 19 February 2020, the European Commission presented a European data strategy and a separate white paper on AI, which “show that Europe can set global standards on technological development while putting people first”. See this comprehensive Q&A factsheet for a summary of the key points. The white paper sets out options to maximise the benefits and address the challenges of AI. The Commission presents options on creating a legal framework that addresses the risks for fundamental rights and safety. It says that a legal framework should be principles-based and focus on high-risk AI systems in order to avoid any unnecessary burden for companies to innovate. It was widely reported in January 2020 that the Commission was considering a five-year ban on the use of facial recognition technology in public areas, but this is not referred to in the white paper. The proposals in the white paper are being consulted on until 19 May 2020.
In its response, the European Consumer Organisation called for legally binding EU rules to establish AI rights for consumers, obligations on companies to be transparent and accountable about their use of AI, and powers for public authorities to ensure AI applications do not expose consumers to harm. In relation to data, the Organisation’s Director General commented: “Too much data is currently concentrated in the hands of a few industry players who use it exclusively for their benefit. Consumers would be better off if, for instance, companies like car manufacturers would give access to vehicle data to allow innovative mobility services to thrive. It is good that the EU wants to legislate how data can be used better but when it comes to personal data, it must always be the consumer to decide whether their data is collected and how it is shared. The objective to help companies compete with big tech should not happen at the cost of consumers’ privacy and autonomy.”
In a recently published opinion on blockchain technology and the EU single market, the European Economic and Social Committee says that protecting privacy is key. It calls on the European Commission to examine the General Data Protection Regulation (GDPR) and propose revisions and further guidance on the relationship between GDPR and blockchain, noting that blockchain technology was mostly unknown when GDPR was being prepared and that the potential tensions between the two therefore need to be reviewed.
European developments are still relevant despite Brexit, not least because this is a discussion which arguably transcends national boundaries, but also because of what it means for data protection-compliant AI. While it is not yet known to what extent UK law will diverge from EU law in the future, it would seem unlikely that data protection rights will be weakened.
Back in the UK, the Committee on Standards in Public Life published a report on AI and its impact on public standards. The report contains a list of recommendations for government, regulators and public bodies using AI to deliver frontline services. The Committee’s message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress, with notable deficiencies. It says that the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation and the ICO are all doing commendable work but, on the issues of transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation.
The Centre for Data Ethics and Innovation recently published its final report and recommendations on data-driven online targeting.
A bill to prohibit the use of automated facial recognition technology in public places and to provide for a review of its use was introduced in the House of Lords on 4 February 2020.
Walker Morris will continue to monitor and report on developments.
Other recent developments
On 12 February 2020, the Department for Digital, Culture, Media and Sport and the Home Office issued a joint initial consultation response to the Online Harms White Paper. See the press release and the ICO’s statement.
The government issued a call for evidence on online advertising. Written submissions are requested by 23 March 2020.
The European Systemic Risk Board (ESRB) published a report on cyber incidents. It summarises the latest estimates of the costs of cyber incidents, and shows that a cyber incident could evolve into a systemic cyber crisis that threatens financial stability. The ESRB has therefore identified cyber risk as one of the sources of systemic risk to the financial system which could have serious negative consequences for the real economy. See the press release.
On 18 February 2020, the European Data Protection Board (EDPB) published its contribution to the evaluation of GDPR, ahead of the European Commission’s evaluation and review of GDPR due by 25 May 2020. The EDPB considers that GDPR’s application since it became applicable on 25 May 2018 has been successful and that it would be premature to revise the legislation at this point in time. Notable comments include:
- The EDPB is calling on EU legislators, in particular the Commission, to intensify efforts towards the adoption of an ePrivacy Regulation to complete the EU framework for data protection and confidentiality of communications.
- The EDPB emphasises that GDPR is a technologically neutral framework designed to be comprehensive and to foster innovation by being able to adapt to different situations without being complemented by sector-specific legislation – GDPR is fully applicable to emerging technologies and the EDPB will continue to elaborate on the impact of such technologies on the protection of personal data.
- There is a pressing need for the Commission to bring the existing set of standard contractual clauses in line with GDPR and to draft additional clauses that cover new transfer scenarios, in particular the adoption of a set of processor-to-processor clauses.