On 19 February 2020, the European Commission released a White Paper with its long-awaited proposals on regulating artificial intelligence (“AI”).
The White Paper contains far-reaching proposals that, if adopted, could have a significant impact on all product manufacturers and developers working in AI. It raises the prospect of requirements being imposed both at the design stage and once products are on the market, together with changes to safety and liability legislation to account for perceived risks posed by AI. For certain “high risk” sectors such as healthcare, transport, energy and certain public sectors, or for use cases affecting workers’ (and possibly consumers’) rights or involving biometric identification or surveillance, mandatory requirements are on the cards.
The Commission notes that AI is already subject to existing EU legislation (including on data protection, consumer law, product safety and liability, amongst others). However, existing EU legislation may not cover all of the risks that AI brings, leaving weaknesses and gaps in the regulatory framework.
Definition
A key issue for the future regulatory framework is how AI should be defined, as the term is not currently defined in EU legislation. The European Commission notes that any definition will need to be flexible enough to accommodate technical progress while still providing legal certainty.
High risk applications
The Commission has proposed a risk-based approach to regulation. The approach under consideration would identify high risk applications by combining an exhaustive list of sectors (e.g. healthcare, transport, energy, and parts of the public sector) with an assessment of whether the intended use poses a risk of significant impact on legal rights, injury, death, or significant material damage. The Commission has also flagged some uses that should always be treated as high risk, such as the use of AI in ways that affect workers’ (and possibly consumers’) rights or the use of AI for biometric identification or surveillance.
The framework would include legally binding requirements for developers and users of AI, building on existing EU legislation. The European Commission highlights that this would be a targeted approach, imposing no new administrative burdens on applications deemed low risk, which would remain subject to existing provisions of EU legislation.
Requirements
The European Commission has identified a number of requirements that would be mandatory for high risk use cases. It envisages that the following key features would be covered by standards:
- training data;
- data and record-keeping;
- information to be provided;
- robustness and accuracy;
- human oversight; and
- specific requirements for certain AI applications, such as those used for the purposes of remote biometric identification.
The Commission also suggests that it may be necessary to specify which economic actors within the supply chain would bear responsibility for these different obligations. This would be without prejudice to existing rules that impose liability for defective products on the producer.
The Commission envisages that these requirements will be subject to prior conformity assessments, with procedures for testing, inspection or certification. This is likely to pose a significant burden for SMEs, and the Commission has recognised that structures will need to be put in place to ensure that innovation is not undermined.
Safety and liability framework
The “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics” accompanies the White Paper. It assesses the impact of AI, IoT and robotics on existing areas of EU product safety and liability legislation and highlights where the Commission considers amendments are required. This ties in with the Commission’s review of safety and liability legislation under its Work Programme 2020 (blogged here).
The Commission is considering specific amendments to individual pieces of EU legislation, applying a targeted risk-based approach.
We will be taking a closer look at these proposals in a future blog.
Non-high risk applications
For lower risk applications, the European Commission has proposed a voluntary labelling scheme. Options include signing up to the mandatory requirements or to similar requirements specifically established for the purposes of the voluntary scheme. Adherents to the scheme would be awarded a quality label to use with their AI applications.
Governance
Under the proposals, national authorities would be entrusted with the implementation and enforcement of the future regulatory framework.
Consultation and next steps
The European Commission is inviting comments on the proposals set out in the White Paper. The consultation is open until 19 May 2020 and can be found here. This is an important opportunity for stakeholders to have their say and shape the future of EU policy in this area. Amendments to existing EU product safety and liability legislation may have a broader scope, affecting even products that do not incorporate AI or other digital technologies.
The Commission then intends to follow up with a legislative proposal and impact assessment towards the end of this year.
Further information
The European Commission’s press release announcing the White Paper is available here.
Check in soon for Part 2, which will focus on the European Commission’s accompanying “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics”.