This blog provides a summary of three recent developments in the regulation of artificial intelligence (AI) in Europe, which will affect all stakeholders who incorporate AI technologies into hardware or software products.

1. European Union AI Act approved by European Parliament

What has happened?

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act (AI Act), following the provisional agreement reached with EU member states at the end of 2023. The AI Act will govern the development, placing on the market and use of AI systems in the EU, and we expect the general-purpose rules to apply from May 2025 (with longer implementation periods for other provisions).

Why does it matter?

The AI Act is a completely new, bespoke AI regulatory regime. It’s the first law in the world to specifically regulate AI on a horizontal basis, and it adopts a risk-based approach: AI systems which pose higher risk will be subject to stricter rules than those deemed to be lower risk. It establishes three risk categories:

  • Unacceptable risk. AI systems in this category pose a threat to the fundamental rights of citizens and are banned – for example, biometric categorisation systems that use sensitive characteristics, emotion recognition in the workplace and schools, social scoring, predictive policing and AI that can manipulate human behaviour.
  • High risk. AI systems which may cause significant harm to health, safety, fundamental rights, the environment, democracy or the rule of law must meet stricter requirements before they can be placed on the market. Such systems include AI used in critical infrastructure, certain law enforcement activities, border control, democratic processes and the administration of justice. Importantly, they can also include AI systems performing a safety function in certain products.
  • Minimal risk. Most AI systems are expected to fall into this category, as they pose no threat to the fundamental rights of citizens. These systems will not be subject to regulatory obligations under the AI Act.

The AI Act also introduces transparency requirements for all companies that use AI systems, as well as guardrails for general-purpose AI.

Strong sanctions are available for noncompliance, with fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. The EU will also set up an AI Office within the European Commission to oversee EU-wide enforcement.

The AI Act will now go to the Council of the EU for formal approval and is expected to be adopted ahead of the next parliamentary elections in June 2024. It will then be published in the Official Journal of the European Union and enter into force 20 days later. It will generally apply 24 months after entry into force (with bans on prohibited practices applying after six months, general-purpose rules after 12 months and some provisions after a longer transition period of 36 months).

2. EU AI Liability Directive moves forward

What has happened?

The Council of the EU has recently confirmed that discussions on the proposed AI Liability Directive (AILD) ‘will be resumed now that agreement has been reached in negotiations on the closely linked AI Act’. These discussions are at an early stage, with neither the Council nor the European Parliament having adopted a negotiating mandate. We therefore expect any further decisions to roll over into the next parliamentary term.

Why does it matter?

The AILD will sit alongside the AI Act as part of EU policymakers’ overall strategy to protect consumers from the perceived risks of AI technologies. The AILD will supplement national rules on fault-based liability – for example, by empowering national courts to order companies to disclose information about high-risk AI systems and by introducing a rebuttable presumption that an AI system gave rise to damage where there is noncompliance with applicable AI regulations.

Taken together with the amended Product Liability Directive (PLD), which includes new provisions designed to cover AI systems, and the EU’s new class action regime (which covers claims under both the AILD and the PLD), these changes will increase litigation risk for companies that design and/or deploy AI.

3. UK AI private members’ bill reaches second reading

What has happened?

In the UK, a private members’ bill on AI regulation had its second reading in the House of Lords at the end of March 2024, having been introduced in late November 2023. A private members’ bill is a bill introduced by an individual member of the House of Commons or House of Lords, rather than as part of official government policy.

Why does it matter?

The private members’ bill proposes a more interventionist approach than that of the UK government, which has deliberately taken a ‘light-touch’ approach to regulating AI.

The UK government has argued that specific new legislation is not required, and that it is sufficient to issue guidance and principles, together with a series of tests that would need to be met before any new laws are passed. This strategy is designed to be pro-innovation and to foster the development of AI.

The private members’ bill, by contrast, proposes the following:

  • Establishing a UK ‘AI authority’ to ensure UK regulators work together for more effective AI regulation and enforcement.
  • Requiring a ‘designated AI officer’ in businesses that develop, deploy or use AI.
  • Enacting regulations to lay down various requirements, including:
    • For any person supplying a product or service involving AI to give customers clear and unambiguous health warnings, labelling, and opportunities to give or withhold informed consent in advance.
    • For any business which develops, deploys or uses AI to allow independent third parties accredited by the AI authority to audit its processes and systems.
    • For any person involved in training AI to supply certain information to the AI authority.

The private members’ bill has received considerable support, suggesting there is concern amongst some policymakers about the current ‘light-touch’ approach to regulating AI. With an election upcoming, there is a very real chance that the UK’s strategy for regulating AI could change before the end of the year – with a much more strictly regulated approach likely if, as expected, there is a change in government.

Posted by Cooley