The EC’s High-Level Expert Group on Artificial Intelligence recently released the first draft of its Ethics Guidelines for Trustworthy AI. The Guidelines are intended to support Europe’s ability to be a leader in AI and introduce the concept of “Trustworthy AI”. What marks this document out for product manufacturers is the practical guidance on implementing ethical principles via the design process, which we address below. The EC’s concept of Trustworthy AI has two parts: (i) it must have an “ethical purpose”, respecting fundamental rights, applicable regulations, and core principles and values; and (ii) the technology must be technically robust and reliable.
The Guidelines are broad and aimed at a wide range of stakeholders involved in all aspects of AI. The EC does not presently intend to make the Guidelines mandatory and is proposing a mechanism for voluntary endorsement by stakeholders. The EC has made clear that the Guidelines are intended to foster rather than stifle innovation, and to strengthen Europe’s position as a leader in cutting-edge, secure and ethical AI that protects and benefits individuals and the common good.
The Guidelines set the framework for the creation of Trustworthy AI: Chapter 1 sets out the core principles and values that all those dealing with AI should comply with – AI must be human-centric and grounded in fundamental rights and principles. Chapter 2 lists the requirements through which those principles are to be implemented. There are ten requirements, including accountability, data governance, non-discrimination, respect for privacy, and safety. Chapter 3 “operationalises” the requirements and introduces a series of “assessment lists” to help create Trustworthy AI. These lists provide practical questions for stakeholders to ask during the design process.
The focus on safety is of particular interest to product manufacturers. The Guidelines explain that safety means ensuring an AI system can achieve its objective without harming humans, resources or the environment. The EC asks stakeholders to start by defining the level of safety applicable in the context of the proposed AI system: you would expect an AI-powered medical device to be held to a higher standard than an AI-powered home appliance. The Guidelines include issues that will be familiar to product manufacturers, including minimising unintended consequences and errors and using risk assessments. One novel area is addressing the adaptability of AI. This has caused commentators a great deal of concern, with some questioning whether it poses a threat to existing principles governing the allocation of liability. For now, the Guidelines recommend that formal mechanisms are put in place to measure and guide these risks. This appears a sensible approach given that products with embedded AI are a relatively nascent technology and have not yet been deployed in sufficient volume for the risks and benefits to be fully understood.
The Guidelines are open for feedback from members of the public until 1 February and, as ever, if they are likely to affect your business in any way, you should consider getting involved. The EC intends to publish a final version of the Guidelines in March 2019 (although the Expert Group has been keen to emphasise that the Guidelines should be seen as a “living document” and updated regularly to ensure they remain relevant).