As we approach the end of 2024, Oliver Wannell looks back at the last 12 months in AI and Law.
Table of contents
- February: Government response to White Paper on AI Regulation
- July: The King’s Speech
- July: Department for Science, Innovation and Technology AI Action Plan
- July: Court of Appeal overturns High Court, ruling that ANNs are not patentable
- September: European Framework Convention on AI
- November: Trump wins race to the White House
- November: Government statement on online safety
- Our top lookouts for 2025
February: Government response to White Paper on AI Regulation
In February, the Government unveiled its long-awaited response to the 2023 White Paper on AI Regulation. As anticipated, the Government Response was pro-innovation, opting for an outcomes-based, ‘regulation-lite’ approach over legally binding rules and regulations. Rejecting “unnecessary blanket rules,” the Government instead committed to five cross-sectoral principles:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress.
Balancing the desire for innovation with the need to regulate AI, and doing so in a way that supports the UK’s other sectors, is clearly challenging. The Government Response anticipates that this challenge requires a contextual approach to regulation, where existing regulators interpret and apply the five principles within their own regulatory remits. Despite widespread industry support for this approach, responses to the consultation also stressed the risks of regulatory overlap, gaps, and poor coordination. These risks are a continuation of the patchwork approach we have seen so far, which inspires little confidence in the UK AI market.
The Government Response sets out plans for a central function to overcome this risk by supporting regulators and coordinating their approach. This includes:
- central regulatory guidance to support regulators in implementing the principles;
- a cross-economy AI risk register to support regulators’ internal risk assessments; and
- a central monitoring and evaluation framework to assess cross-economy and sector-specific impacts of the new regime.
The Government has also confirmed its intention to convene regulators to deliver joint regulatory guidance and possibly a joint regulatory sandbox. However, as regulators have already set to work on implementation, the Government’s central function certainly feels a step behind.
Looking ahead to 2025, we anticipate many more regulators setting out their sector-specific approaches to regulating AI. We will keep a keen eye on the Government’s central function as it develops.
July: The King’s Speech
Regulating AI has been on the Government’s agenda for almost a decade, as it has affirmed and reaffirmed its commitment to making the UK a global hub of innovation, yet progress towards a UK regulatory framework for AI has been underwhelming. After the Government Response in February, it seemed that a full regulatory framework might not be the end goal. The King’s Speech in July, however, suggests the new Government is taking a different approach. Whereas the Conservative government took a ‘hands-off’ approach, focusing only on regulating the use of AI, Labour’s King’s Speech included new plans to regulate AI itself.
The Government plans to establish “the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. What this means in detail remains to be seen. But there has clearly been a change of direction: from the previous Government’s refusal to introduce binding measures on AI developers (at least in the short term) to the new Government’s proposal for a set of measures in its first parliamentary term. This was anticipated by the new Science, Innovation and Technology Secretary Peter Kyle’s statement earlier in the year that, if elected, Labour would introduce a statutory code requiring AI companies to share testing data with the Government. The new Government’s plans to regulate AI also featured in its manifesto pledge to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models.”
Looking ahead to 2025, we await the details of the Government’s proposed regulations and, in all likelihood, a consultation ahead of any potential new legislation. It is clear that regulating AI is on the new Government’s agenda, and we are keen to see how it plans to deal with the complexities it brings.
July: Department for Science, Innovation and Technology AI Action Plan
Also in July, the Department for Science, Innovation and Technology commissioned an AI Action Plan, and the Secretary of State for Science, Innovation and Technology appointed Matt Clifford, Chair of the Advanced Research and Invention Agency, to lead it.
Mr Clifford was appointed to lead the Government’s work to “identify ways to accelerate the use of AI to improve people’s lives by making services better and developing new products.” This work is coupled with the establishment of an AI Opportunities Unit to “bring together the knowledge and expertise to take full advantage of AI and implement recommendations from the Action Plan.”
Looking ahead to 2025, we should start to see some outcomes from the AI Opportunities Unit, which we anticipate may focus on the UK’s infrastructure to support start-ups and scale-ups within the AI sector.
July: Court of Appeal overturns High Court, ruling that ANNs are not patentable
Comptroller General of Patents, Designs and Trade Marks v Emotional Perception AI Limited [2024] EWCA Civ 825
Also in July, the Court of Appeal handed down a judgment that AI inventions comprising artificial neural networks (ANNs) do fall within the exclusion from patentability contained in the Patents Act 1977. Overturning the High Court’s decision of November 2023, the Court of Appeal determined that ANNs are computer programs and are therefore not patentable in the UK unless they fall outside the exclusion by making a wider technical contribution.
ANNs are AI systems loosely modelled on the human brain: processing elements (also known as neurodes or perceptrons) are connected to each other in a non-linear way. This enables the system to move beyond the traditional “if”–“then” model and process multiple data points simultaneously. When trained, ANNs develop weightings that simulate neural pathways within the brain and enable the system to learn.
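To make that concrete, here is a minimal sketch in Python. This is our own illustration with hypothetical layer sizes and untrained random weights, not EPAI’s system or anything considered by the court: each neurode sums its weighted inputs and passes the result through a non-linear activation, and it is those weights that training adjusts.

```python
# A minimal, illustrative ANN forward pass (hypothetical sizes and
# untrained random weights; a real system learns these weights).
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 3 input features, 4 hidden neurodes, 1 output.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((1, 4)), np.zeros(1)

def forward(x):
    # Each layer computes a weighted sum of its inputs, then applies a
    # non-linear activation (tanh), the non-linearity that takes the
    # system beyond a simple "if"-"then" mapping.
    hidden = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ hidden + b2)

print(forward(np.array([0.5, -1.0, 2.0])))
```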
Emotional Perception AI (EPAI) developed an ANN to provide media file recommendations based on analyses of human emotions. Through facial expressions, voice patterns, and other biometric data, it provides music and other media recommendations that correspond with the user’s mood. The UK Intellectual Property Office (UKIPO) deemed this unpatentable under the Patents Act 1977, which excludes “a program for a computer […] as such” from being patented unless it can be shown to have a technical effect beyond the program itself.
In a significant blow to the UK AI industry, the Court of Appeal overturned the High Court decision and agreed with the UKIPO’s initial decision. This brings the UK back in line with the more restrictive approach taken by the European Patent Office (EPO) to the patentability of AI inventions.
Looking ahead to 2025, this decision returns us to the position before the High Court’s 2023 decision. We don’t anticipate the Court of Appeal’s decision being disturbed in the courts, but we are keen to see whether the patentability of AI inventions will be on the Government’s agenda now that a fresh approach to AI regulation seems likely.
September: European Framework Convention on AI
In September, the EU and 9 non-EU countries, including the UK and USA, signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. Like the EU’s AI Act, the Convention is a world first and cements Europe’s place as a pioneer in AI regulation.
Chapter I of the Convention sets out its purpose “to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.” It also sets out the requirement for signatories to “adopt or maintain appropriate legislative, administrative or other measures to give effect to [its] provisions.”
Chapter II of the Convention contains general obligations to ensure that:
- AI activities are consistent with national and international human rights obligations.
- AI systems are not used to undermine “democratic institutions and processes.”
- “Fair access to and participation in public debate” and the individual’s “ability to freely form opinions” are protected.
Chapter III of the Convention sets out principles which range from specific principles relating to privacy and personal data (Article 11), to general principles on “human dignity and individual autonomy” (Article 7).
The Convention principally applies to public authorities, and to private actors acting on behalf of public authorities, but individual states have the option to determine its application to private actors more generally. It includes a follow-up mechanism – the Conference of the Parties – comprising official representatives of the signatories to determine the extent to which its provisions are being implemented.
Looking ahead to 2025, the UK Government has already published several guidance documents on the use of AI within the public sector, and we don’t anticipate significant changes to those as a result of the Convention. It will be interesting to see if there will be any reference to the Convention within Labour’s plans for incoming AI protections, particularly within the private sector, where we anticipate some divergence between signatories.
November: Trump wins race to the White House
November saw the election of Donald Trump back to the White House. The Trump campaign had its own awkward run-ins with AI earlier in the year, reportedly creating deepfake celebrity endorsements and AI-generated images of black Trump supporters. But where does the President-elect stand on AI regulation?
President Trump has already indicated that he will scrap Biden’s executive order on AI safety and security, which included requirements for developers to share safety test results with the government, guidance for the government’s use of AI, and a final rule on non-discrimination within AI-based healthcare programmes. The 2024 Republican platform claims that the Order “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology”.
Suresh Venkatasubramanian, professor at Brown University and former assistant director in the White House Office of Science and Technology Policy, predicts that Trump’s election will mean “an aggressive push towards more innovation in AI” and no US AI Act.
Looking ahead to 2025, we are keen to see what Trump’s pro-innovation approach to AI looks like and how the change of administration affects the US relationship with Europe and the recently signed European Framework Convention on AI.
November: Government statement on online safety
Also in November, the Government published a Draft Statement of Strategic Priorities for online safety. In it, Technology Secretary Peter Kyle outlined five priorities:
- Safety by design
- Transparency and accountability
- Agile regulation
- Inclusivity and resilience
- Technology and innovation.
These priorities are set for Ofcom, as the independent regulator for online safety, to interpret. However, the Government is clear that Ofcom’s approach should include a robust regulatory framework to monitor and tackle emerging harms, including harms from GenAI.
Looking ahead to 2025, we await Ofcom’s response to the statement and to see how it proposes to regulate the sector within its current regulatory powers. At present, the Government has indicated that no further powers will be required to achieve the statement’s aims, but it is open to future legislation if the regulator requires it.
Our top lookouts for 2025
The Government sees the potential of the UK’s AI market, predicted to grow to more than US$1 trillion by 2035, but safely harnessing that potential remains a foremost challenge. 2025 will be an important year for the Government to flesh out its plans for greater regulation. We’re still some way from a comprehensive regulatory framework for AI, but here’s what we think 2025 will bring:
- More consultation is almost certain, this time with a focus on regulating AI developers themselves, as the Government prepares AI regulation and consumer protections.
- Regulators will continue to set out their own sector-specific approaches to regulating AI, whilst the Government’s central coordinating role continues to unfold. Our AI and regulatory experts are ready to guide you through emerging regulations as they evolve.
- As Trump returns to the White House, the direction his administration takes on AI regulation will be critical to US tech companies and, perhaps, to investors and regulators around the world.
- The Government has not given any indication that it will announce a policy decision on the legal protections afforded to AI creations. In lieu of a steer from policymakers, we will be on the lookout for more intellectual property litigation as the courts continue to navigate this complex and evolving scene.
How can we help?
Our tech experts at Capital Law are here to support you in 2025 and beyond. Please get in touch with Oliver Wannell in our Commercial Disputes team who will be able to support you.