The EU has published proposals on the regulation of artificial intelligence, which seek to strike a balance between protecting consumers and encouraging technological development. These include a resolution on IP issues, an ethics framework for development, and liability rules providing for compensation of up to 2 million euros and a 30-year limitation period for certain claims. Read on for our overview of the key proposals.
Last week, the European Parliament adopted proposals on how to regulate AI. These are amongst the first detailed legislative proposals to be published internationally, so make for interesting reading for stakeholders all over the globe. The proposals cover three areas:
- ethics framework for AI
- liability for AI causing damage
- intellectual property rights
For product manufacturers working with AI, these proposals deserve careful consideration. In particular, those operating “high-risk” AI face the prospect of a robust new regulatory regime. The European Commission has said it will publish draft legislation next year addressing AI. Some of the European Parliament’s proposals, or variations on them, could well be adopted by the Commission. There will be opportunities for affected stakeholders to engage with any new AI laws throughout the usual legislative process, but efforts to understand how these proposals could impact your business should start now.
Ethics framework for AI
The European Parliament has set out a framework of ethical principles and legal obligations for the development, deployment and use of AI, robotics and related technologies.
All AI would need to be developed, deployed and used in accordance with European Union law, including the GDPR (Regulation (EU) 2016/679), along with voluntary standards. The European Parliament is proposing that national authorities should be appointed to monitor the ethics framework.
There are specific obligations for “high-risk” AI.
- High-risk is defined as a “significant risk” that the AI will cause harm or injury, taking into account the “specific use or purpose”, “the sector where they are developed, deployed, or used” and the “severity” of the potential damage.
- The annex provides guidance on high-risk sectors and purposes:
- High-risk sectors include employment, healthcare, transport, energy, defence, and finance.
- High-risk purposes include recruitment, medical treatments and procedures, automated driving, autonomous military systems, energy production and distribution, and emissions control.
- The European Parliament is proposing that high-risk AI should be identified following a risk assessment conducted by the national authorities appointed to monitor the ethics framework.
- The proposals would require high-risk AI to be developed and used in a way that:
- guarantees human oversight, and allows humans to regain control;
- has adequate cybersecurity, includes “fall-back plans” in case of safety or security issues, ensures reliable and accurate performance and makes it clear to users that they are interacting with AI systems;
- does not create bias or discrimination and respects social responsibility and gender equality;
- is environmentally sustainable; and
- respects users’ rights to privacy and protection of their data.
- High-risk AI would also be subject to assessment and certification by national supervisory bodies to ensure compliance with the requirements of the framework.
- Compliance could also be policed by a right of redress, which would allow any person to bring a claim for injury or harm caused by AI systems that breach the obligations set out in the framework.
Liability for AI
The liability proposals also adopt a two-tier approach with regimes for operators of (1) “high-risk” AI systems, and (2) other AI systems.
The definition of an AI system is broad: “a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals”.
“High-risk” AI systems
- The definition of “high-risk” under the civil liability proposals differs from the definition under the ethics framework. High-risk is defined as a “significant potential” to cause harm or damage that is “random and goes beyond what can reasonably be expected”. The significance will depend on the severity of possible harm, the degree of autonomy of decision-making, the likelihood of the risk materialising and the context in which the product is being used. The annex setting out high-risk AI systems and critical sectors had not been published as at the date of this blog post.
- Operators of “high-risk” AI systems would be strictly liable for any harm or damage caused by an activity, device or process driven by that AI system.
- Claimants would not need to establish that the operator was at fault, and due diligence would not be a defence. Any attempt to exclude liability in agreements with users would be void. However, operators would not be held liable where the harm was caused by force majeure (i.e. extraordinary and wholly unforeseeable events beyond the control of the parties).
- Under the proposals, claimants would be able to recover up to 2 million euros for death or personal injury, and up to 1 million euros for economic loss or damage to property. Claims would be subject to a special limitation period of 30 years (significantly longer than the 10-year long stop under the EU’s Product Liability Directive) but comparable with limitation periods for injuries arising from nuclear incidents.
- The proposals introduce the concept of “frontend” and “backend” operator:
- A frontend operator “exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation”. It is likely to capture the producer but could arguably capture other economic actors as well.
- A backend operator “on a continuous basis… defines the features of the technology and provides data and an essential backend support service and therefore also exercises a degree of control over the risk connected with the operation and functioning of the AI-system”. Where a backend operator is also a producer, the proposal envisages that the Product Liability Directive will take precedence.
- “Frontend” operators would be required to hold mandatory insurance covering the compensation amounts specified in the proposals, and “backend” operators to hold cover for business liability or product liability events.
Other AI systems
- Other AI systems would be subject to a fault-based liability regime. An operator could escape liability by establishing that the AI system was activated without its knowledge (and that reasonable measures were taken to avoid such activation), or that due diligence was observed in relation to the AI system. Member States would determine the amounts of compensation and the limitation periods for these claims.
Under both regimes, the proposals preserve the right to argue contributory negligence.
Intellectual Property Rights
The European Parliament has also put forward a resolution on creating an effective intellectual property rights system, with safeguards for the EU’s patent system, to protect AI innovations. This resolution covers copyright, data collection, trade secrets, the use of algorithms and deep fakes. It also distinguishes between AI-assisted human creations and AI-generated creations.