
Client Alert: The Brave New World of Artificial Intelligence: Legal and Regulatory Challenges

October 26, 2023
Quinn Emanuel Memorandum

Rapid scientific and technological advances in artificial intelligence (AI) offer undeniable opportunities for businesses across a range of sectors, but they also present challenges, both under the current framework of laws and regulations and in light of the various proposed changes to the regulatory regime. Being aware of those challenges – and, to the extent possible, taking steps now to address them and to plan for when they arise in the future – will allow companies to reap the benefits of AI whilst minimising the associated risks.

            The focus of this Memorandum is on the AI regulatory landscape in the UK and the EU. It begins by providing an overview of the current UK legislative landscape covering aspects of AI use, before summarising the proposed approaches to regulation in the UK and the EU. The Memorandum then discusses several recent cases in which the English courts have considered AI-related issues. It concludes with some practical suggestions for the steps businesses can take now to minimise the risks of AI use against the backdrop of rapidly evolving AI technology, and current and anticipated future AI regulation.

I. Introduction

            There is no single, universally accepted definition of AI, although it may be helpful to think of AI as technologies that enable computers to simulate elements of human intelligence – such as perception, learning and reasoning.[1] IBM notes that AI is “increasingly used as shorthand to describe any machines that mimic our cognitive functions such as ‘learning’ and ‘problem solving’”.[2]

The following categories of AI can be distinguished:[3]

  1. Narrow AI – designed to perform a specific task (such as speech recognition) and unable to adapt to perform another. Narrow AI is designed to assist, rather than replace, the work of humans.
  2. Artificial general intelligence (AGI; also referred to as “strong” AI) – a system able to undertake any intellectual task that a human can, and that can reason, analyse and achieve a level of understanding that is on a par with humans. AI systems have yet to achieve AGI. 
  3. Machine learning – a method that allows a system to learn and improve from examples by finding patterns in large amounts of data, which it can then use to make predictions. The system can then independently refine its algorithm based on the accuracy of those predictions (see the illustrative sketch following this list).
  4. Deep learning – a type of machine learning whose design has been informed by the structure and function of the human brain and the way it transmits information. Deep learning is applied in so-called ‘foundation models’, of which ‘large language models’ (LLMs), such as ChatGPT, are one example. Deep learning models can be adapted to a wide range of tasks (having been trained on very large datasets), despite not having been trained expressly to do those tasks.
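To make the idea of ‘learning from examples’ more concrete, the short Python sketch below shows a model finding a pattern in a handful of labelled examples and then making predictions about new data. It is purely illustrative: the spam-filtering scenario, the figures and the use of the scikit-learn library are assumptions made for the purpose of the example, and are not drawn from any source cited in this Memorandum.

```python
# Illustrative sketch only: a toy machine-learning example.
# The spam-filtering scenario and the numbers are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each training example: [number of links in the email, number of "urgent" words]
training_examples = [[0, 0], [1, 0], [8, 5], [7, 6], [0, 1], [9, 4]]
training_labels = [0, 0, 1, 1, 0, 1]  # 0 = not spam, 1 = spam

# "Learning": the model finds a pattern (a decision boundary) in the examples.
model = LogisticRegression()
model.fit(training_examples, training_labels)

# "Prediction": the model is applied to emails it has never seen before.
print(model.predict([[6, 5], [0, 0]]))  # expected to print something like [1 0]
```

Deep learning follows the same basic loop – fit a model to examples, then use it to predict – but with far larger neural-network models trained on vast datasets, which is what allows foundation models to be adapted to tasks they were not expressly trained to perform.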

            In the widest sense of the term, AI has, for some time now, been an integral part of modern daily life – from voice assistants, to facial detection and recognition systems, to chatbots. More recently, however – and propelled by rapid advances in AI technology – public discourse on the opportunities and risks of AI (and, with that, proposals for necessary AI regulation) has taken centre stage. In his speech in London on 26 October 2023, ahead of the AI Safety Summit,[4] for example, Rishi Sunak, the UK Prime Minister, said that “new dangers and new fears” brought about by AI should be addressed “head on”, warned about the risks of AI technology (going as far as suggesting that, “in the most unlikely but extreme cases, there is [...] the risk that humanity could lose control of AI completely”), and announced the creation of the UK’s new AI safety institute.[5] This rapid progress has been exemplified by OpenAI’s GPT-4,[6] released in March 2023, which will, at the click of a button, write a love poem to your significant other or create a recipe from a disparate collection of ingredients. AI technologies are also being successfully deployed in other, less immediately obvious but extremely important areas, such as medical research, where Google DeepMind’s AlphaFold can now predict 3D models of protein structures, thus revolutionising drug development;[7] medical imaging; nuclear science, where AI is being applied to the search for commercially viable nuclear fusion technology;[8] and other scientific areas – such as the process developed by Meta to create concrete emitting 40% less carbon,[9] thus facilitating efforts to combat the climate emergency.

            Rapidly accelerating scientific advances bring to the fore questions regarding appropriate AI regulation. While the proposed approaches (notably, in the UK and the EU) differ, it is generally accepted by policy makers across the board that additional regulation beyond the current legislative framework is needed. Recent developments which have prompted discussions on AI regulation include Meta’s July announcement that it intends to make Llama 2[10] open source – which Professor Dame Wendy Hall, the Professor of Computer Science who co-authored the UK government’s AI Review published in 2017, compared to giving people a template to build a nuclear bomb, questioning whether the industry could be trusted to self-regulate;[11] the UK Government reportedly considering new legislation requiring all companies to label AI-generated content with a ‘watermark’ in an effort to combat ‘deep fakes’;[12] and WorldCoin’s recent eyeball-scanning programme, as part of which volunteers agreed to provide their biometric data in exchange for digital tokens, which prompted the UK Information Commissioner’s Office (ICO) to issue a statement.[13]

II. The current legislative landscape

            At present, no laws exist in the UK that were expressly intended to regulate AI.

            Instead, different aspects of AI are regulated through a patchwork of legal and regulatory requirements originally intended for other purposes, but which now also capture uses of AI. Accordingly, certain areas (notably, intellectual property law) appear to be ripe for reform, although, as noted below, no imminent changes are presently envisaged on the legislative horizon.

By way of a non-exhaustive list of examples:

  1. UK data protection law: The UK GDPR and the Data Protection Act 2018 impose various requirements in relation to the processing of personal data. The basic principles enshrined in the UK GDPR regime – including lawfulness, fairness and transparency; accuracy; integrity and confidentiality; and accountability – apply to processing personal data, including where AI is engaged.

The UK data protection regime also imposes different obligations on data processors and data controllers. Broadly, an AI provider that is a controller has direct duties to the data subjects, whereas as a processor it only has direct duties to the controller. The distinction can, however, be ambiguous in practice, particularly when overlaid with the complexities of AI systems.

A further important aspect of the UK GDPR regime relevant in the context of AI systems is profiling and automated decision-making, with ‘profiling’ defined as follows:

“any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movement”.[14]

The UK GDPR regime grants the data subject a right “not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.[15] Whilst this right is qualified, and subject to certain exceptions,[16] even when one of those exceptions applies, the data controller is required to “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”.[17]

In that context, the ICO warned organisations in October 2022 to assess the public risks of using emotion analysis technologies[18] before implementing these systems. The ICO further cautioned that “[t]he inability of algorithms which are not sufficiently developed to detect emotional cues, means there’s a risk of systemic bias, inaccuracy and even discrimination”.[19]

It should be noted, however, that the Data Protection and Digital Information (No. 2) Bill currently going through Parliament[20] casts Article 22 as a right to specific safeguards (as opposed to a general prohibition on solely automated decision-making).[21] It also clarifies that a “solely” automated decision is one that is taken without any meaningful human involvement.[22]

  2. Equality law: the Equality Act 2010 (EqA 2010) prohibits discrimination by employers on the grounds of any protected characteristic (including age, sex or race).[23] If, as is widely accepted, AI systems can exhibit biases because of the ways in which they are trained, this could render the use of some AI tools that make or influence workplace decisions unlawful.[24]
  3. Intellectual property law and copyright: there are several respects in which copyright law is relevant to AI. Assuming that an AI-generated work meets the “originality” test[25] (which is itself unlikely to be straightforward), there is a question as to who owns a work generated by AI without human intervention, and who thus enjoys the protections under the Copyright, Designs and Patents Act 1988 (CDPA).

Under the CDPA, the general rule is that the first owner of copyright will be the author.[26] The CDPA further provides that “[i]n the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken”,[27] where “computer-generated” means that the work is generated by computer in circumstances where there is no human author of the work.[28] The application of these provisions to AI-generated content is likely to be fraught with difficulty; in particular, the meaning of “undertaking the necessary arrangements” remains uncertain in the context of AI, and will require clarification, whether through case law, amendment or regulatory guidance.

Separately, another aspect of AI use in the copyright arena is copyright infringement, specifically the use of copyright-protected works to train AI systems, which involves, at a high level, feeding large amounts of data into the AI’s ‘brain’. Under the CDPA, the acts that infringe copyright include copying a substantial part of a copyright-protected work, either directly or indirectly,[29] such that there has been copying of elements which reproduce the expression of the author's intellectual creation. This is subject to an exception for certain “permitted acts”, including the making of copies for text and data analysis for non-commercial research (TDM),[30] which is likely to be relevant in the AI context. However, the TDM exception is itself subject to limitations: notably, it is limited to research for a non-commercial purpose. In its response to the Intellectual Property Office’s (IPO) consultation published in June 2022, the government stated that it proposed to widen the TDM exception so as to allow TDM for any purpose, with no possibility of an opt-out by rights holders.[31] However, it now appears clear that the proposals to widen the scope of the TDM exception will not be proceeding.[32]

As far as patents are concerned, the position appears to be clearer than that in respect of copyright. Under the Patents Act 1977, a patent for an invention may be granted primarily to an inventor or joint inventors.[33] There is no regime akin to the copyright regime for computer-generated works, and the “actual deviser of the invention” should be a “person”.[34] The Patents Court’s 2020 decision in Stephen L Thaler, upheld by the Court of Appeal in 2021 (discussed in Section IV below), illustrates this point.

  4. Online Safety Bill: the draft legislation, which aims to protect children from harmful online content, is set to become law next year. It will cover both search results generated by chatbots and content that systems such as ChatGPT post on social media.[35]

III. Proposed approach to AI regulation

                As an overarching observation, the proposed approach to AI regulation differs markedly in the UK, on the one hand, and the EU, on the other, with the former being principles-based and adaptive,[36] and the latter detailed and prescriptive.

            It remains to be seen how these competing approaches will work in practice, and which approach ultimately delivers better, more sustainable regulatory outcomes. For the time being, businesses should therefore consider how they are likely to be affected by the principles underpinning the applicable regulatory framework, once implemented, and the steps they can take now to facilitate future compliance. Businesses operating in the UK and the EU should consider those matters particularly carefully, as compliance with the UK rules may still mean failure to comply with the EU regulations, given the very different nature of the two regimes.

  1. The UK: a principles-based approach

On 29 March 2023, the UK government published a white paper entitled “A pro-innovation approach to AI regulation” (White Paper),[37] which sets out a framework for the government’s plans (Framework).[38]

The consultation on the White Paper closed on 21 June 2023, and the government’s response is expected later this year. Regulators will then be expected, over the following year, to issue guidance on how to apply the principles set out in the White Paper.

As noted, the UK approach can, in broad summary, be characterised as ‘principles-based’. These principles (Principles)[39] are assumed to be applicable across sectors, and aim to capture key elements of responsible AI design, development and use.[40]

The Framework, which does not of itself amend the scope of the existing legislative regime relating to AI, encompasses a proposal to adopt a risk-based and sector-specific approach to regulation, which can be seen as more agile, flexible, and easily adaptable to the rapidly developing nature of AI technology, and the risks and opportunities that it presents.

The Framework is designed to regulate the outcomes of AI systems, rather than the technology itself. For this reason, the White Paper does not contain a fixed definition of AI. Instead, its description of AI focuses on ‘adaptivity’ (the capacity of AI systems to learn from data and perform new forms of inference not directly envisaged by their human programmers) and ‘autonomy’ (AI’s ability to make decisions without express human intent or ongoing control). A consequence of these characteristics is that the outputs of AI systems, and the logic through which they are generated, can be difficult to explain and predict.[41]

The White Paper focuses in particular on ‘foundation models’ within the umbrella of AI systems (see above, under “Deep learning”)[42] – which would include OpenAI’s GPT-4 large language model. The White Paper notes that, given the transformative impact of those models, care needs to be taken in assessing how they interact with the proposed Framework. The basic premise is that the adaptable approach proposed within the Framework is appropriate for the regulation of ‘foundation models’.

Given the sector-specific and principles-based approach of the Framework, it is proposed that the implementation of the Principles would fall within the remit of existing regulatory bodies, such as the Competition and Markets Authority (CMA), Ofcom and the Equality and Human Rights Commission. It is not proposed, accordingly, that a standalone body would be established to regulate AI uses in line with the Framework.[43]

The White Paper also proposes that the Framework should be supplemented by various tools for “trustworthy AI”, such as assurance techniques, voluntary guidance and technical standards.

It is also worth noting that the CMA, the UK competition regulator, recently published its own report titled “AI Foundation Models: Initial report” (CMA Report),[44] which considers competition and consumer protection issues in the development and use of AI foundation models.

The initial review[45] in the CMA Report considers, in broad summary, how foundation models are developed and deployed, potential outcomes for competition in the development of those models, the impact on competition in other markets, potential outcomes for consumers, and the potential role for regulation in respect of foundation models.

In the same vein as the White Paper, the CMA Report considers that no regulation of foundation models is needed at this stage, and that a set of flexible guiding principles[46] – which overlap, to a significant extent, with the Principles set out in the White Paper – should be introduced instead to ensure that competition and consumer protection remain effective driving forces in the development and deployment of foundation models.

  2. The EU: a prescriptive approach enshrined in legislation

            As noted above, the approach to AI regulation intended to be adopted within the EU can be viewed as a polar opposite of the UK approach.

            Specifically, the EU has been, for several years now, attempting to agree a detailed and prescriptive AI regulation (the AI Act, first proposed in April 2021). The AI Act, which proposes a single definition of AI,[47] is designed to regulate AI systems based on their level of risk to persons, prohibiting certain particularly harmful AI practices, introducing a set of mandatory requirements for ‘high-risk’ AI, and imposing moderate transparency requirements for ‘low-risk’ AI.[48] The AI Act is not expected to become law before the end of 2023 (with a further two years before it comes fully into force).

            One of the concerns that has been expressed in relation to the proposed EU approach is that it is overly rigid, and would lack the agility and adaptability necessary to respond quickly to the rapidly changing reality of AI development and use. The counter-argument is that the regulatory framework proposed by the EU offers certainty and predictability to the various actors operating in the AI field.

            It should also be noted that there is a proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive),[49] which is closely intertwined with the AI Act (in that the AI Act aims to address the safety of AI systems, whereas the AI Liability Directive addresses the position when an AI system turns out to be unsafe and causes damage). The primary purpose of the AI Liability Directive is to make it easier for individuals harmed by AI systems to seek redress from those at fault. The AI Liability Directive will introduce a rebuttable presumption of causality (subject to certain conditions), in recognition of the fact that claimants may face significant challenges in establishing a causal link between non-compliance with a duty of care, on the one hand, and the output produced by an AI system, on the other.

            More broadly, the divergence of the approaches to AI regulation proposed by the UK and the EU (as well as beyond) raises the question of the importance of international coordination. In its ninth interim report on “The governance of artificial intelligence”, published on 31 August 2023,[50] the House of Commons Science, Innovation and Technology Committee identified twelve challenges to AI governance, one of which was “The international coordination challenge”. In that context, the report rightly noted that, because AI is a global technology, the development of governance frameworks to regulate its uses must also be an international undertaking.

IV. AI-related legal issues in action: recent cases

                The breadth and depth of legal issues that arise in the context of AI are illustrated by the recent cases summarised below.

  1. Getty Images’ claim against Stability AI:[51] on 17 January 2023, Getty Images, a stock photo provider, issued a press release announcing that it had commenced legal proceedings in the English High Court against Stability AI for intellectual property and copyright infringement,[52] following the release by Stability AI, in August 2022, of Stable Diffusion, an AI-based system for generating images from text inputs, and of its image generator DreamStudio.[53] Getty Images summarised its position as follows: “Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images absent a license [sic] to benefit Stability AI’s commercial interests and to the detriment of the content creators”.[54] In its Particulars of Claim filed on 12 May 2023, Getty Images sought (among other things) declarations in respect of Stability AI’s alleged infringement of the Claimants’ intellectual property rights, an injunction against Stability AI, and an enquiry as to damages. In July 2023, Stability AI filed an application for summary judgment against the Claimants in respect of the allegations concerning copyright and database right infringement.[55] It is understood that, over September and October 2023, the parties exchanged factual and witness evidence in advance of the Court’s hearing of the application.

Further, on 3 February 2023, Getty Images filed another claim against Stability AI, this time in the Delaware Federal Court, in which it accused Stability AI of misusing more than 12 million Getty photos to train its Stable Diffusion AI image-generation software.[56]

  2. Stephen L Thaler:[57] in its 2021 decision, the English Court of Appeal dismissed an appeal against the High Court's decision that a patent application by an individual which named an AI machine as inventor was rightly rejected by the IPO, because an inventor had to be a natural person under the Patents Act 1977 (see the discussion in Section II above).[58] Mr Thaler appealed to the Supreme Court, arguing on 2 March 2023 that his two applications for patents over inventions devised by his “creativity machine”, DABUS, should be granted.[59] The Supreme Court judgment in the case is currently awaited.
  3. Rohingya refugees’ claim against Meta: in December 2021, it was reported that Rohingya refugees from Myanmar had filed a class action against Meta in California, claiming US$150 billion over Meta’s alleged negligent failure to prevent its platform from being used to facilitate genocide against the Rohingya people in Myanmar. It also appears that, across the Atlantic, a letter before action in respect of the same claims may have been submitted to Meta’s London office.[60]

V. AI development and use: some considerations for companies

                As the discussion above illustrates, the rapid development and deployment of AI systems may affect businesses in a variety of ways. Navigating the existing regulatory landscape, comprised of a patchwork of legal and regulatory requirements, also presents challenges, which are likely to become more complex once the proposed UK and EU AI-specific regulatory regimes come into force.

            Some of the questions that companies may want to ask themselves in this context are set out below. The list is by no means exhaustive, but it does highlight the breadth and depth of AI-related issues which are worth considering now in order to avoid regulatory and litigation issues in the future.

  1. Data protection: within the context of the UK and EU GDPR regimes, the correct characterisation of the AI provider as a data processor or data controller is of critical importance. Accordingly, this question – the answer to which can often be less than straightforward, owing to the complexities of AI systems – should be given appropriate consideration.

Further, as far as automated decision making discussed above is concerned, it may be prudent for businesses to consider what safeguards can be put in place prospectively, in view of the provisions contained in the Data Protection and Digital Information (No. 2) Bill.

The ICO’s November 2022 guidance, “How to use AI and personal data appropriately and lawfully”,[61] offers further assistance on complying with data protection requirements when using AI.

  2. IP and copyright: as outlined above, the use of AI introduces a significant element of uncertainty in how certain aspects of IP and copyright law are to be applied – for example, as to the meaning of the term “undertaking the necessary arrangements” in s. 9(3) of the CDPA. Accordingly, parties to agreements for the development and use of AI systems that may lead to the creation of new copyright works should consider including any necessary terms as to their ownership, assignment and licensing in the contractual documents. Similar observations apply in relation to AI-generated inventions and patent rights, albeit for a different reason than in the case of copyright works (namely, because the “actual deviser of the invention” should be a “person”).
  3. Fraud and misrepresentation: fraud can take a number of forms, and AI can unfortunately be utilised for nefarious purposes. One area where businesses may inadvertently expose themselves to claims for misrepresentation, however, is using AI to generate automatic descriptions of products and services without checking their accuracy. Another is exaggerated claims about AI products themselves: in the US, for example, the Federal Trade Commission published, in February 2023, its guidance titled “Keep Your AI Claims in Check”,[62] in which it warned companies that AI products must “work as advertised”.
  4. Employment and governance: aside from data protection concerns, AI also presents other issues in an employment context. This includes the use of artificial intelligence as part of the recruitment process: for example, to sift through CVs, to filter candidates automatically through online assessment tests, and to search candidates’ social media profiles for key terms. Because the use of AI tools increases the risk of discrimination[63] and of unfair or irrational decision making, human intervention will remain key in minimising those risks. Businesses should therefore consider how AI tools and human oversight can be integrated most effectively.

Further, in addition to human intervention, organisations should also consider how a particular AI algorithm has been trained. If the training data is biased or – for historic reasons, for instance – lacks diversity, the AI algorithm will exhibit those issues too. The design of AI algorithms may also give rise to similar concerns, since US-designed systems may be less suitable for the UK and EU markets, given their different legislative underpinnings. Businesses should also consider using fairness-aware algorithms[64] in employment decisions (an illustrative sketch of one simple fairness check follows below).
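By way of illustration only, the short Python sketch below shows one simple check of the kind that fairness-aware approaches build into the design or evaluation of an algorithm: comparing selection rates between two groups that differ by a protected characteristic (sometimes called a ‘demographic parity’ check). The scenario, group labels and figures are hypothetical, are not drawn from any source cited in this Memorandum, and are not a statement of any legal test.

```python
# Hypothetical example only: a simple "demographic parity" check on the
# outcomes of an automated CV-shortlisting tool.
def selection_rate(decisions):
    """Proportion of candidates the tool shortlisted (1 = shortlisted, 0 = rejected)."""
    return sum(decisions) / len(decisions)

# Shortlisting decisions for candidates in two groups defined by a protected
# characteristic (e.g. sex), recorded as 1 (shortlisted) or 0 (rejected).
group_a_decisions = [1, 0, 1, 1, 0, 1, 1, 0]
group_b_decisions = [0, 0, 1, 0, 0, 1, 0, 0]

rate_a = selection_rate(group_a_decisions)
rate_b = selection_rate(group_b_decisions)

# A large gap between the two selection rates is a warning sign that the tool
# may be producing disparate outcomes and warrants human review.
print(f"Group A: {rate_a:.2f}  Group B: {rate_b:.2f}  gap: {abs(rate_a - rate_b):.2f}")
```

Checks of this kind do not of themselves make a tool lawful or fair, but they give organisations a concrete, auditable signal that can prompt the human review discussed above.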

Where AI-based automation built on open-source frameworks such as “AgentGPT” and “BabyAGI” is used for decision making within organisations, this can affect governance structures, with corresponding regulatory consequences, because the traditional allocation of responsibility and accountability within the organisation (to human decision makers, auditors, etc.) becomes more difficult.[65] Businesses should therefore take time to review their policies to ensure that any role AI systems have in governance is clearly delineated, and that human actors retain control over the ultimate decision making.

  5. Impending regulatory changes: although changes to AI regulation are yet to be finalised, businesses would be well advised to begin assessing how their existing use of AI systems fits into the anticipated regulatory framework. To the extent that certain aspects of that use could present difficulties in the future, consideration should also be given to the steps that can be taken now to ensure future compliance. In the same vein, undertaking an inventory of long-term contracts that include AI elements could be useful in assessing whether, for example, certain provisions may become unenforceable – and, if so, whether any mechanisms are built into the contract to address this.

 

***

If you have any questions about the issues addressed in this memorandum, or if you would like a copy of any of the materials mentioned in it, please do not hesitate to contact:

 

Yasseen Gailani
Email: yasseengailani@quinnemanuel.com
Phone: +44 20 7653 2021

 

Anna Parfjonova
Email: annaparfjonova@quinnemanuel.com
Phone: +44 20 7653 2076

To view more memoranda, please visit www.quinnemanuel.com/the-firm/publications/
To update information or unsubscribe, please email updates@quinnemanuel.com

 

END NOTES: 

[1] https://researchbriefings.files.parliament.uk/documents/CDP-2023-0152/CDP-2023-0152.pdf

[2] https://www.ibm.com/design/ai/basics/ai/

[3] https://lordslibrary.parliament.uk/artificial-intelligence-development-risks-and-regulation/

[4]   Ahead of the AI Safety Summit, the Government also published, on 25 October 2023, its discussion paper on “Capabilities and risks from frontier AI”: https://assets.publishing.service.gov.uk/media/65395abae6c968000daa9b25/frontier-ai-capabilities-risks-report.pdf

[5] https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023

In his speech to the UN General Assembly on 22 September 2023, Oliver Dowden, UK’s Deputy Prime Minister, also warned that the rate of progress of AI technology would require countries to regularly meet to discuss the “necessary guardrails”, and that “global regulation is falling behind current advances”: https://www.gov.uk/government/speeches/deputy-prime-minister-oliver-dowdens-speech-to-the-un-general-assembly-22-september-2023

[6] When ChatGPT passed the Turing test (the essence of which is that if you could speak to a computer without knowing that you were not speaking to a human, the computer could be said to be artificially intelligent) – being only the second chatbot that was able to do so – it prompted significant reaction. This included the Future of Life Institute (FLI), an organization dedicated to minimising the risk and misuse of new technologies, publishing an open letter (the signatories of which included Elon Musk, Steve Wozniak, and Yuval Noah Harari) calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.

[7] https://www.deepmind.com/research/highlighted-research/alphafold

[8] https://nucleus.iaea.org/sites/ai4atoms/ai4fusion/SitePages/AI4F.aspx

[9] https://tech.facebook.com/engineering/2022/4/sustainable-concrete/

[10] Like its competitor ChatGPT, a so-called ‘large language model’.

[11] https://raeng.org.uk/blogs/ai-blog-series-professor-dame-wendy-hall-freng-frs#:~:text=At%20this%20juncture%2C%20what%20questions,to%20build%20a%20nuclear%20bomb.

[12] https://inews.co.uk/news/politics/businesses-forced-ai-watermarks-rishi-sunak-2454835

[13] https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/07/ico-statement-on-worldcoin/

WorldCoin subsequently confirmed that it had prepared a Data Protection Impact Assessment with the help of an external law firm: https://www.pymnts.com/cryptocurrency/2023/worldcoin-catches-attention-united-kingdom-data-regulator/

[14] Article 4

[15] Article 22(1)

[16] Under Article 22(2), the exceptions are where the decision: “(a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent”.

[17] Article 22(3)

[18]  I.e., technologies that process data such as gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture. Examples of such technologies include monitoring the physical health of employees by offering wearable screening tools.

[19] https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/10/immature-biometric-technologies-could-be-discriminating-against-people-says-ico-in-warning-to-organisations/#:~:text=The%20Information%20Commissioner's%20Office%20(ICO,ICO%20expectations%20will%20be%20investigated.

[20] https://bills.parliament.uk/bills/3430

[21]https://questions-statements.parliament.uk/written-statements/detail/2023-03-29/hlws672: “As automated decision making systems are increasingly AI-driven, it is important to align the Article 22 reforms in the Data Protection and Digital Information Bill with the UK’s wider approach to AI regulation”.

[22]https://questions-statements.parliament.uk/written-statements/detail/2023-03-29/hlws672: “Meaningful involvement means a human’s participation must go beyond a cursory or ‘rubber stamping’ exercise - and assumes they understand the process and influence the outcome reached for the data subject”.

[23] S. 4

[24] https://researchbriefings.files.parliament.uk/documents/CBP-9817/CBP-9817.pdf

[25] Under the traditional UK approach, originality means that the author must have created the work through their own skill, judgment and individual effort and that it is not copied from other works (Ascot Jockey Club Ltd v Simons [1968] 64 WWR 411). Under EU law, the relevant question is whether a work is the “author's own intellectual creation” (Infopaq International A/S v Danske Dagblades Forening (Case C‑5/08) (Infopaq I)).

[26] S. 11(1)

[27] S. 9(3)

[28] S. 178

[29] S. 16(3)

[30] S. 29A

[31] Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation - GOV.UK (www.gov.uk)

[32] Artificial Intelligence: Intellectual Property Rights - Hansard - UK Parliament

House of Lords - At risk: our creative future - Communications and Digital Committee (parliament.uk)

The Government has also decided not to make changes to the existing provisions with respect to computer generated works and their duration, noting that because “the use of AI to generate creative content is still in its early stages, the future impacts of this provision are uncertain. It is unclear whether removing it would either promote or discourage innovation and the use of AI for the public good”: see Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation - GOV.UK (www.gov.uk)

[33] S. 7(3)

[34] S. 7(3); this is also confirmed by the IPO in its Formalities Manual: “Where the stated inventor is an ‘AI Inventor’, the Formalities Examiner request a replacement F7. An ‘AI Inventor’ is not acceptable as this does not identify ‘a person’ which is required by law. The consequence of failing to supply this is that the application is taken to be withdrawn under s.13(2)”; see Chapter 3.05: Formalities Manual (online version) - Chapter 3: The inventor - Guidance - GOV.UK (www.gov.uk)

[35] AI Chatbots like ChatGPT to Face UK Scrutiny (aibusiness.com)

[36]   In his speech on 26 October 2023, Rishi Sunak noted that the UK will not “rush to regulate” AI, describing it as “a point of principle”, and adding: “how can we write laws that make sense for something that we don’t yet fully understand?”: https://www.gov.uk/government/speeches/prime-ministers-speech-on-ai-26-october-2023

[37] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1176103/a-pro-innovation-approach-to-ai-regulation-amended-web-ready.pdf

[38] https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach#full-publication-update-history

[39] Reflecting the OECD's values-based AI principles: https://oecd.ai/en/ai-principles

[40]  The relevant Principles are as follows:

  1. Safety, security and robustness: AI systems should function in a robust, secure and safe way, and risks should be continually identified, assessed and managed.
  2. Appropriate transparency and explainability, where explainability means the extent to which it is possible for relevant parties to access, interpret and understand the decision-making processes of an AI system.
  3. Fairness: AI systems should not undermine legal rights, discriminate unfairly against individuals or create unfair market outcomes.
  4. Accountability and governance: governance measures should be in place to ensure effective oversight of AI systems, with clear lines of accountability established across the AI life cycle.
  5. Contestability and redress: users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

 

[41] The White Paper goes on to state that “[b]y defining AI with reference to these functional capabilities and designing our approach to address the challenges created by these characteristics, we future-proof our framework against unanticipated new technologies that are autonomous and adaptive”.

[42] The White Paper defines ‘foundation models’ as “an emerging type of general purpose AI that are trained on vast quantities of data and can be adapted to a wide range of tasks”.

[43] The White Paper also notes that, while it is not proposed that the Principles be put on a statutory footing “initially”, “[f]ollowing this initial period of implementation, …, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles”.

[44]  The CMA Report was published on 18 September 2023: https://www.gov.uk/government/publications/ai-foundation-models-initial-report

[45] An update on the CMA’s thinking, including regarding how the principles have been adopted, will be published in early 2024.

[46]   The CMA defines those guiding principles as follows:

 

  1. Accountability: foundation model developers and deployers are accountable for outputs provided to consumers.
  2. Access: ongoing ready access to key inputs, without unnecessary restrictions.
  3. Diversity: sustained diversity of business models, including both open and closed.
  4. Choice: sufficient choice for businesses so they can decide how to use foundation models.
  5. Flexibility: having the flexibility to switch and/or use multiple foundation models according to need.
  6. Fair dealing: no anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.
  7. Transparency: consumers and businesses are given information about the risks and limitations of foundation model generated content so they can make informed choices.

The CMA Report also contains examples of factors that could undermine those principles, including mergers and acquisitions which could lead to a substantial lessening of competition in the development or deployment of foundation models, firms using their leading positions in key markets to block innovative challengers who develop and use foundation models (for example, through misuse of vertical integration), and if consumers receive false and misleading content from foundation models services that impacts or is likely to impact their decision-making.

[47] Per Article 3(1), “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

[48] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206

[49] The AI Liability Directive forms part of a broader package of Commission proposals which also includes revisions to the Directive on Liability for Defective Goods. Further, it is also instructive to view the AI Liability Directive alongside the Commission’s Cyber Resilience Act, which, at a very high level, aims to set out the boundary conditions for the development of secure products with digital elements, and to create conditions allowing users to take cybersecurity into account when selecting and using products with digital elements.

[50] https://committees.parliament.uk/publications/41130/documents/205611/default/

[51] Getty Images (US), Inc. and others v. Stability AI Limited, claim number IL-2023-000007

[52] https://newsroom.gettyimages.com/en/getty-images/getty-images-statement

[53] https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/

[54] A few days earlier, a group of visual artists filed a lawsuit against Stability AI, as well as Midjourney Inc, and DeviantArt Inc in the San Francisco federal court. According to the lawsuit, Stability AI's Stable Diffusion software copied billions of copyrighted images to enable Midjourney and DeviantArt's AI to create images in those artists’ styles without permission: https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmogjdxvr/IP%20AI%20COPYRIGHT%20complaint.pdf

[55]   On the basis that the Claimants have no real prospect of succeeding on those claims at trial.

[56]https://fingfx.thomsonreuters.com/gfx/legaldocs/byvrlkmwnve/GETTY%20IMAGES%20AI%20LAWSUIT%20complaint.pdf

[57] Stephen L Thaler v Comptroller-General of Patents, Designs and Trade Marks [2021] EWCA Civ 1374

[58] There was, however, a dissenting judgment from Birss LJ, in which he concluded that the appeal should be allowed.

[59] https://www.reuters.com/technology/uk-supreme-court-hears-landmark-patent-case-over-ai-inventor-2023-03-02/

[60] https://www.reuters.com/world/asia-pacific/rohingya-refugees-sue-facebook-150-billion-over-myanmar-violence-2021-12-07/

[61] https://ico.org.uk/media/for-organisations/documents/4022261/how-to-use-ai-and-personal-data.pdf

[62] https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check

[63] In its report titled “Technology Managing People – the legal implications”, the Trades Union Congress discussed how an algorithm (“a ‘logical’ rules-based process”) can be seen as a “provision, criterion or practice” within the meaning of section 19(1) of the EqA 2010 for the purpose of bringing an indirect discrimination claim: https://www.tuc.org.uk/sites/default/files/Technology_Managing_People_2021_Report_AW_0.pdf

 

[64] Fairness-aware modelling in machine learning refers to a method of developing AI algorithms designed to minimise bias and ensure fairness, thus reducing the risk of discriminatory decisions. Fairness-aware modelling is achieved through including mathematical definitions of fairness into the design of the algorithm: https://www.ornsoft.com/blog/what-is-fairness-aware-modeling/

[65] EY report “Adapting the UK’s pro-innovation approach to AI regulation for foundation models”: https://assets.ey.com/content/dam/ey-sites/ey-com/en_uk/topics/ai/ey-adapting-the-uks-pro-innovation-approach-to-ai-regulation-for-foundation-models.pdf