On 23 May 2022, the UK’s Office for Product Safety and Standards (“OPSS”) published a report on the impact of artificial intelligence (“AI”) on product safety (the “Report”). Cooley was asked to contribute insights to the Report, which examines the use of AI in consumer products and its implications for product safety and liability. The Report runs to over 100 pages, so this blog provides an overview of its key points, including the safety benefits and challenges of incorporating AI into the design and manufacture of products.

What is an ‘AI product’?

AI is a broad term referring to technology that can sense its environment, take action in response and learn. In essence, AI seeks to replicate human problem-solving and decision-making abilities. In practice, the term “AI” covers a wide range of applications, from simple algorithms to machine learning. The Report draws a critical distinction between AI and automated products: whilst an AI system’s decisions tend to evolve over time as it is trained on, and constantly learns from, the information it receives, automated products are pre-set and programmed to carry out a task in a pre-determined way.
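To make the distinction concrete, the minimal Python sketch below (our own illustration, with hypothetical names and numbers; nothing here is drawn from the Report) contrasts a pre-programmed thermostat, whose rule is fixed at design time, with a learning one whose decision rule shifts as it receives feedback, so the same input can produce different outputs over time.

```python
# An "automated" product: behaviour is fixed at design time.
def automated_thermostat(temp_c: float) -> str:
    # Pre-set rule: the same input always yields the same decision.
    return "heat_on" if temp_c < 20.0 else "heat_off"

# An "AI" product: behaviour adapts as it observes the user.
class LearningThermostat:
    def __init__(self, setpoint_c: float = 20.0):
        self.setpoint_c = setpoint_c

    def feedback(self, user_adjustment_c: float) -> None:
        # Nudge the learned setpoint towards the user's corrections,
        # so the decision rule itself changes over time.
        self.setpoint_c += 0.5 * user_adjustment_c

    def decide(self, temp_c: float) -> str:
        return "heat_on" if temp_c < self.setpoint_c else "heat_off"

ai = LearningThermostat()
print(ai.decide(21.0))   # "heat_off" when new
ai.feedback(+3.0)        # the user repeatedly turns the heat up...
ai.feedback(+3.0)
print(ai.decide(21.0))   # ...and now "heat_on": the learned rule has moved
```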

AI and product safety

The Report identifies a number of benefits AI might bring to consumer safety, including:

  • The prevention of mass product recalls through enhanced data collection during industrial assembly, enabling the identification of non-conforming units on a product line, improved inspection and the monitoring of post-purchase data (see the sketch after this list).
  • Safer product design with algorithms trained to develop only safe and standard-compliant solutions, allowing designers and engineers to focus on other design aspects.
  • Safer customer usage with AI customer services able to answer queries and provide recommendations on safe usage.
  • Customisation and personalisation through the training of algorithms, enabling producers to better anticipate consumers’ needs based on the data consumers generate.
  • Detecting, analysing and preventing cyber-attacks to protect consumer safety and privacy.
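As a concrete illustration of the first point, the sketch below (entirely our own; the Report does not prescribe any particular technique) flags out-of-tolerance units on a production line using a simple, robust statistical test:

```python
import statistics

def flag_nonconforming(measurements, tolerance=3.0):
    """Flag units whose reading deviates sharply from the batch median."""
    med = statistics.median(measurements)
    # Median absolute deviation: robust to the very outliers we want to catch.
    mad = statistics.median(abs(m - med) for m in measurements)
    return [i for i, m in enumerate(measurements)
            if mad > 0 and abs(m - med) / mad > tolerance]

# Example: one out-of-tolerance torque reading in a batch of seven.
readings = [5.01, 4.98, 5.02, 4.99, 7.40, 5.00, 5.03]
print(flag_nonconforming(readings))  # -> [4]
```

Catching such a unit before despatch, rather than after thousands of sales, is how richer line-side data could avert a mass recall.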

On the other hand, the Report also flags that AI can bring its own challenges:

  • AI systems may not act as intended. Because they are autonomous by design, there is an inherent unpredictability that creates potential safety risks and further complicates the attribution of responsibility.
  • Data-driven algorithms are only as good as their data inputs. The risk of malfunction, whether causing physical injury or leading to discrimination and human rights violations, increases without proper consideration of the training and input data (the sketch after this list illustrates the point).
  • AI products may be vulnerable to cyber-attacks as AI is normally dependent on external platform providers.
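The data-quality point can be made concrete with a deliberately crude, hypothetical example (ours, not the Report’s): a power tool whose safety lock is calibrated from adult users only, so the “learned” threshold never reflects the children it may also need to protect.

```python
# Hypothetical: a grip-activated safety lock calibrated from adult data only.
adult_grip_newtons = [310, 295, 330, 305, 320]
learned_cutoff = 0.5 * sum(adult_grip_newtons) / len(adult_grip_newtons)

def unlock(grip_newtons: float) -> bool:
    # The tool unlocks for anyone exceeding the learned cut-off.
    return grip_newtons > learned_cutoff

print(learned_cutoff)  # 156.0 N, tuned to the adult-only sample
print(unlock(180.0))   # True: a strong child can also unlock the tool,
                       # a failure mode the unrepresentative data never revealed
```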

The Report noted that while current UK product safety regulations can be applied to many existing AI consumer products, there are shortcomings, including:

  • AI software is not explicitly included in, or excluded from, definitions in UK law.
  • Product safety has traditionally considered risks to the physical health and safety of the consumer. However, AI consumer products pose risks of immaterial harms (such as psychological harm) or indirect harms from cyber security vulnerabilities.
  • The legislative focus is on one-time compliance when a product is ‘placed on the market’. This may not be sufficient where an AI product has the potential to change autonomously once in the hands of a consumer (see the sketch after this list).
  • As the supply chain for consumer products increases in complexity, there are potential challenges in understanding the responsibilities of different economic operators.
  • As standards largely do not yet consider the use of AI in devices, manufacturers, conformity assessment bodies and authorities have limited guidance on AI product compliance issues.
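The ‘placed on the market’ point lends itself to a simple sketch (again, our own illustration with invented numbers): a product that keeps learning in the field can drift away from the behaviour assessed at the point of sale, which is why one-time compliance checks may need to be supplemented by post-market monitoring.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Outputs recorded at conformity assessment versus after months of in-field learning.
baseline_outputs = [0.50, 0.52, 0.49, 0.51, 0.50]
field_outputs = [0.50, 0.55, 0.61, 0.68, 0.74]

def drifted(baseline, observed, limit=0.1):
    """Crude post-market check: has average behaviour moved beyond a set limit?"""
    return abs(mean(observed) - mean(baseline)) > limit

print(drifted(baseline_outputs, field_outputs))  # True -> re-assessment may be due
```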

The Report sets out a framework to aid consideration of the effects of AI on consumer product safety and liability. The framework highlights key characteristics of AI (mutability, opacity, data needs and autonomy) and identifies potential associated challenges. Its aim is to guide policymakers when evaluating and developing product safety and liability policy for AI consumer products. The considerations set out also provide a useful basis for product-related AI risk assessments by economic operators.

AI and product liability

The Report explains that the hypothetical application of the UK’s product liability rules to AI products is a challenge. It remains unclear how these rules would apply to AI products that can change how they operate after being placed on the market (e.g. through interaction with consumers and their data via machine learning). It is also uncertain to what extent manufacturers should be held liable for decisions made by an autonomous system, for damage that could not have been predicted, or where the involvement of a large number of actors (including data providers or third-party platforms) in design and manufacture obscures the allocation of liability. AI also relies on complex algorithms that are opaque and can be difficult for third parties to understand – a further challenge to identifying the source of potential harm and attributing liability.

Approaches to tackling AI risks in consumer products

The Report notes that issues with consumer products may become more pronounced with advancements in AI. It discusses initiatives and tools that are already seeking to address related shortcomings in current UK laws on safety and liability:

  • Standards: these provide transparency and promote trust in the application of technologies. Equally, standards support communication between all parties involved by using uniform terms and concepts. AI standards could be developed either by industry, as a tool for self-regulation under which it defines product requirements for itself, or by policymakers.
  • Industry led approaches: several professional associations and consortia already publish specifications and recommendations on AI. These range from broad codes of conduct to product-specific voluntary standards. The Report found a notable emphasis on ethical, human-centric AI systems which respect human rights and democratic values.

Will the UK be the first to regulate AI?

While countries such as the UK have been hesitant to regulate AI products for fear of obstructing innovation, the introduction of regulations by first movers is likely to be influential (as was the case with the EU’s GDPR). The UK is not the only country considering the need for legislative change.

  • EU: the Proposal for a Regulation laying down harmonised rules on AI (the AI Act), published on 21 April 2021, envisages obligations for economic operators that place AI systems on the European market, as well as for AI systems located in third countries whose output is used in the EU. Corresponding changes to other EU regimes (such as those for machinery and medical devices) are expected. Although the final UK government approach to AI remains unclear, there will be a strong impetus for it to follow a similar model to that of the EU.
  • US: under the current administration, the Consumer Product Safety Commission (“CPSC”) aims to shift away from the previous administration’s deregulatory approach and move towards increased scrutiny and enforcement of consumer products. The CPSC is considering existing safety standards for AI and may be influenced by European legislative activity.
  • China and Japan: in contrast to the EU’s approach, China and Japan are focusing on producing guidelines, developed through industry consultation, that address key principles.

While the Report recognises AI’s seismic impact, both realised and potential, significant barriers to AI adoption (cost, privacy and awareness) remain. Going forward, the Report advocates a more transparent approach to AI systems, with greater consideration given to the data used for training, testing and validation. At the same time, it highlights the need for more regulation in this area to provide certainty for economic operators and consumers. Any new regulation will need to be balanced against the need to foster innovation, and should not duplicate, or cut across, existing legal frameworks, such as those relating to product safety and privacy. Whatever approach is taken, AI is a fast-developing field that may fundamentally change the product safety and liability landscape. Stay tuned for future updates.

Posted by Cooley