Artificial Intelligence and Data Protection

The use of personal data has been both revolutionary and evolutionary. The explosion in the availability of personal data stems from technological developments such as the personal computer, social media platforms, and smartphones. The ability to track and record our communications, the creation of large volumes of publicly available data, and the ability to buy datasets have led to a new technological innovation that many predict will have explosive consequences.

ChatGPT 

ChatGPT is a major technological leap in generative Artificial Intelligence, combining machine learning and natural language processing. Developed by OpenAI, it is a platform that uses a deep learning model to generate human-like responses to questions and prompts.[1] ChatGPT was released to the public in November 2022 and has led to a surge in public debate on the benefits and risks of the development of Artificial Intelligence (‘AI’). It has also come to the attention of a number of data protection regulators globally and is high on the agenda of the United Kingdom (“UK”) government.

In March 2023, OpenAI’s ChatGPT service revealed users’ chat histories and credit card information to other users. This prompted Italy’s data protection authority to investigate ChatGPT and temporarily block it, citing the unlawful collection of personal data and the lack of age verification for children.[2] The regulator’s view was that there was no legal basis to justify the massive collection and storage of personal data to train the algorithms underpinning the platform. It noted that at times ChatGPT produces results that do not correspond to the actual data, thus breaching the GDPR principle of accuracy. If true, this is a concerning aspect of a service that is rapidly being adopted in multiple products and services.[3]

ChatGPT has now resumed in Italy, but data protection regulators in Germany, France and Canada have begun investigations. The European Data Protection Board has created a taskforce dedicated to ChatGPT, to help data protection authorities to co-operate and exchange information.[4] Wojciech Wiewiórowski, the European Data Protection Supervisor, has warned that data protection regulators will need to be prepared for another ‘Cambridge Analytica’ scandal given the pace of AI development.[5]

In the UK, the Information Commissioner’s Office (“ICO”) has published two blog posts in response to ChatGPT and AI development generally. The first is “Generative AI: eight questions that developers and users need to ask”[6] and, more recently, “Don’t be blind to AI risks in rush to see opportunity”, in which the ICO highlights that it will be checking whether organisations have addressed privacy risks if they are using AI.[7] The ICO now offers an innovation advice service and will review an organisation’s intended use of AI within 10 to 15 days.

Data Protection and AI risks

The public’s opinion on the mass use of our personal data is difficult to gauge. It is recognised that the large technology companies provide valuable ‘free’ services. These services are of course not ‘free’ but provided in exchange for our personal data. Some argue that this trade-off is skewed because the financial benefit that technology companies gain from the use of our personal data far outweighs the benefit that users of the services receive.[8] Sometimes the use of our personal data is ancillary to the provision of a paid service, and many companies recognise that the data and metadata they hold are a valuable asset. With the recent developments in AI, public opinion is again split between those who welcome the benefits it can bring and those who are wary of the potential risks.

How far data protection legislation can be used to protect against AI harms is unclear. The ICO has highlighted in its latest guidance that where personal data is being processed in an AI system, an organisation cannot ‘trade away’ the requirement to comply with the data protection principles.[9] The UK GDPR/GDPR contain many provisions that are applicable to AI systems. Derived and inferred data are also considered to be personal data under the UK GDPR/GDPR.[10] The concerns that most people have about the use of AI are precisely those addressed by the data protection principles, in particular: lawfulness, fairness, transparency, accuracy, and security.

The GDPR has been in effect now for five years and, although the law on data protection is still developing and evolving, there is now a substantial amount of regulatory experience and published guidance. However, data protection from an individual standpoint still has its problems. While data protection gives individuals specific rights over their personal data, it places the onus on the individual to complain or bring infringements to the attention of the supervisory authority. Where personal data is being unlawfully processed, it usually follows that there is a lack of transparency about the processing taking place. Cambridge Analytica was only caught out because of a whistleblower, and Clearview AI spent years scraping people’s images from the Internet before it came to the attention of regulators.[11]

A further problem for data protection is that the development of new technologies and AI can have a collective impact on society, rather than just on any one individual. Where the harm is collective, it can be difficult to bring class action lawsuits based on data protection breaches (in the UK in particular) because of litigation funding difficulties and because of the way the rules on collective action are structured. In Lloyd v Google [2021] UKSC 50, even though the Supreme Court accepted that Google’s DoubleClick Ad cookie had unlawfully tracked Apple iPhone users, the claimant was still unable to recover damages from Google.

The UK GDPR/GDPR do provide a ground for objecting to profiling,[12] but in reality individuals have no meaningful control over their personal data becoming part of a machine learning model. Those models may use profiles that have been created in a multitude of ways, which could then be used to discriminate against people whose attributes do not match a certain profile. For data protection law to be effective, an individual has to first know that a legal or significant effect is actually taking place. If, for example, a job, insurance, or housing is not offered to someone on the basis of their profile (or because they do not match a specific profile), how would they ever become aware of this in order to challenge its accuracy?

In principle, provisions in the UK GDPR/GDPR do contain obligations that could safeguard the rights of individuals in the use of AI. Article 22(1) gives data subjects the right not to be subject to solely automated decision making (i.e. a decision made without human involvement) where the decision produces legal or similarly significant effects. Of note is that data protection regulators consider a decision which impacts a person’s behaviour or choices to be a significant effect, suggesting a broad interpretation of this provision.[13] Even where a decision is only partly made by automated means, organisations are still expected to provide meaningful information about the logic involved.[14]

There are currently two requests for a preliminary ruling before the Court of Justice of the European Union (“CJEU”) that raise questions relevant to AI development. In Case C-203/22, the CJEU has been asked for guidance on the definition of ‘meaningful’ in Article 15(1)(h) of the GDPR, which requires controllers to provide information about the logic of automated decisions and profiling. The ruling will clarify the scope of the disclosure that controllers must provide about their algorithms. Case C-634/21 concerns third-party credit scoring used by financial institutions to make decisions on loans. The question for the CJEU is whether the ‘decision’ for the purposes of Article 22 is the credit scoring itself, or the financial institution’s use of that scoring. The CJEU has also been asked about the scope of disclosure to the data subject, specifically whether the ‘weighting’ used in the scoring must also be provided or whether it can be withheld as confidential business information.

AI Regulation

There is no single law in the UK that specifically regulates AI. The Department for Science, Innovation and Technology released a white paper on the government’s approach to AI regulation in March 2023. Rather than introducing legislation, the government proposes a non-statutory, principles-based framework to be implemented by existing regulators such as the ICO, Ofcom, the Financial Conduct Authority and the Competition and Markets Authority.

The UK wants to become a “science and technology superpower” by 2030 and sees AI development as central to this goal.[15] Since 2014, it has invested £2.5 billion in AI, and is investing a further £1.1 billion in AI projects.[16] The government’s ambition is to become a global leader in AI, including in its governance; however, this is likely to be difficult given the introduction of the European Union’s (“EU”) AI Act. At present there is a stark difference between the UK’s approach to the regulation of AI and that of the EU. The UK government has stated that it is adopting a “light touch” approach, whereas the EU is moving rapidly towards adopting AI regulation. The EU AI Act is currently in the trilogue process and is expected to be agreed by the end of this year.

It is unclear whether the UK government’s stance will now change given the public concern about AI in the light of ChatGPT. Any AI legislation will have to complement data protection law. Recent developments in generative AI make clear that the UK’s proposed Data Protection and Digital Information Bill does not contain the safeguards that will be necessary as AI is developed and deployed, and in fact aims to weaken existing data protection rules.

UK Prime Minister Rishi Sunak recently acknowledged that guardrails and regulation are required for AI, implying that the white paper published in March may not now represent the government’s intended approach.[17] The Prime Minister’s press office announced in June that the UK will host the first global summit on the regulation of AI later this year,[18] where perhaps the government’s position will be clarified. 

If you are interested in any further information or advice, please contact my clerks on 020 3179 2023 or privacylawbarrister@proton.me


[1] See OpenAI’s website: https://openai.com/blog/chatgpt

[2] The Garante per la Protezione dei Dati Personali’s decision is available (in Italian) here: https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870847

[3] The following example, if true, is extremely concerning: “ChatGPT cooks up fake sexual harassment scandal and names real law professor as accused”, Vishwam Sankaran, The Independent, 6 April 2023, available here: https://www.independent.co.uk/tech/chatgpt-sexual-harassment-law-professor-b2315160.html

[4] See the European Data Protection Board’s press release, available here: https://edpb.europa.eu/news/news/2023/edpb-resolves-dispute-transfers-meta-and-creates-task-force-chat-gpt_en.

[5] ‘A Cambridge Analytica-style scandal for AI is coming,’ Melissa Heikkila, 25 April 2023, MIT Technology Review. Available here: https://www.technologyreview.com/2023/04/25/1072177/a-cambridge-analytica-style-scandal-for-ai-is-coming/  

[6] ICO blog post, “Generative AI: eight questions that developers need to ask”, 3 April 2023. Available here: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/generative-ai-eight-questions-that-developers-and-users-need-to-ask/.

[7] ICO blog post, “Don’t be blind to AI risks in rush to see opportunity”, 15 June 2023. Available here: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/06/don-t-be-blind-to-ai-risks-in-rush-to-see-opportunity/

[8] The professors Carissa Véliz and Shoshana Zuboff have written extensively about the impact of the digital world on society and privacy.

[9] ICO Guidance on AI and data protection, available here: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/

[10] See the Article 29 Working Party’s “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, revised and adopted by the EDPB on 6 February 2018.

[11] Christopher Wylie, who worked for Cambridge Analytica, exposed the company’s unlawful use of data to the Guardian Newspaper in 2018, see: https://www.theguardian.com/uk-news/video/2018/mar/17/cambridge-analytica-whistleblower-we-spent-1m-harvesting-millions-of-facebook-profiles-video. And the ICO press release on Clearview AI on 23 May 2022 is available here: https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/

[12] Article 21(1) of the UK GDPR gives the data subject the right to object to the processing of personal data based on legitimate interests or public interest, including profiling, but it is a qualified and not absolute right.

[13] See the Article 29 Working Party Guidelines cited at note 10 above.

[14] ICO guidance on automated decision making, available here: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/individual-rights/automated-decision-making-and-profiling/

[15] See the Ministerial Foreword of the white paper, “A pro-innovation approach to AI regulation”, 29 March 2023. Available here: https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

[16] Ibid.

[17] The Prime Minister discussed AI in an interview with Sky News. See “Experts say AI could pose same risks as nuclear war and pandemics, says Sunak”, Dominic McGrath, The Independent, 7 June 2023, available here: https://www.independent.co.uk/news/uk/politics/rishi-sunak-joe-biden-prime-minister-experts-lucy-powell-b2353305.html

[18] See the UK government’s press release on 7 June 2023: https://www.gov.uk/government/news/uk-to-host-first-global-summit-on-artificial-intelligence